StarCoder is an open AI language model for code, developed by Hugging Face, ServiceNow, and other collaborators under the BigCode project and trained for code-completion tasks, with the aim of helping programmers write quality, efficient code in less time. The 15.5B parameter StarCoder models are trained on 80+ programming languages from The Stack (v1.2). For comparison, OpenLLaMA is an openly licensed reproduction of Meta's original LLaMA model; it uses the same architecture and is a drop-in replacement for the original LLaMA weights. An interesting aspect of StarCoder is that it is multilingual, so it has been evaluated on MultiPL-E, which extends HumanEval to many other languages; it can also process larger inputs than most other free code models. Published comparisons of WizardCoder with other models on the HumanEval and MBPP benchmarks report it scoring several points higher than prior state-of-the-art open-source code LLMs. At the small end of the family, bigcode/tiny_starcoder_py is a 159M parameter model that runs on a 2 GB GPU and can generate Python code, and its behavior in downstream wrappers matches the underlying ggml ./starcoder example.
A quick CLI smoke test (here with a LLaMA-family model under dalai) looks like: ~/dalai/alpaca/main --seed -1 --threads 4 --n_predict 200 --model models/7B/ggml-model-q4_0.bin. Several bindings exist for GGML models, including go-skynet/go-ggml-transformers.cpp (Golang bindings), alongside llama.cpp and bloomz.cpp. Beyond code models, the ecosystem covers Meta's Llama 2, a collection of pretrained and fine-tuned large language models ranging in scale from 7 billion to 70 billion parameters, and MPT-30B, a commercial Apache 2.0-licensed base model (MPT-7B-StoryWriter-65k+, a sibling model, is designed to read and write fictional stories with super-long context). TinyStarCoderPy is a 164M parameter model with the same architecture as StarCoder (8K context length, MQA and FIM). Quantised 4-bit, 5-bit and 8-bit GGML builds of StarCoder can be effortlessly used as a substitute, even on consumer-grade hardware, and projects like Supercharger take this further with iterative coding workflows. If you are on Windows, please run docker-compose rather than docker compose.
The conversion script has been updated to work with all model types for HF to GGUF conversion. StarCoder and StarCoderBase are 15.5B parameter models with 8K context length, infilling capabilities, and fast large-batch inference enabled by multi-query attention. The VS Code extension was developed as part of the StarCoder project and was later updated to support the medium-sized Code Llama 13B base model. When loading a GGML model programmatically (for example via ctransformers), the key arguments are model_path_or_repo_id (the path to a model file or directory, or the name of a Hugging Face Hub repo), model_file (the name of the model file in the repo or directory), and config (an AutoConfig object). Make sure you are logged in to the Hugging Face Hub before downloading gated files. ServiceNow and Hugging Face released StarCoder as one of the world's most responsibly developed and strongest-performing open-access large language models for code generation. The ggml examples currently support GPT-2, GPT-J, GPT-NeoX, Dolly V2, and StarCoder.
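The infilling capability works through fill-in-the-middle (FIM) special tokens: the prompt carries the code before and after the gap, and the model generates the missing middle. A minimal sketch of assembling a prefix-suffix-middle prompt (the token strings below match StarCoder's published special tokens, but treat them as an assumption and verify against the tokenizer config):

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt in prefix-suffix-middle (PSM)
    order; the model then generates the missing middle section until it
    emits an end token."""
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"

prompt = build_fim_prompt(
    prefix="def add(a, b):\n    ",
    suffix="\n    return result\n",
)
```

The completion the model returns for this prompt is the middle text only, which the caller splices back between prefix and suffix.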
The same conversion workflow applies outside code models too; the Whisper large-v2 model, for instance, has been converted to ggml. On the training side, Evol-Instruct is a novel method that uses LLMs instead of humans to automatically mass-produce open-domain instructions of various difficulty levels and skill ranges, improving the performance of instruction-tuned models. StarCoder itself uses the gpt_bigcode model type: as a matter of fact, it is an autoregressive language model trained on both code and natural-language text. The Salesforce Research team's CodeGen is another large-scale language model built on the concept of conversational AI programming. Note that StarCoder has not been aligned to human preferences with techniques like RLHF, so it may generate problematic content. Support for the architecture in KoboldCpp was handled in a subsequent release. One practical caveat: C++ code that works fine natively may misbehave when called from Python bindings, so test both paths.
Hugging Face and ServiceNow have partnered to develop StarCoder, a new open-source language model for code. StarCoderBase was trained on 1 trillion tokens sourced from The Stack (Kocetkov et al., 2022), a large collection of permissively licensed GitHub repositories. The model is truly great at code, though that specialization does come with trade-offs. The GGML quantization formats used here include GGML_TYPE_Q4_K ("type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights) and GGML_TYPE_Q3_K ("type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights). GGML files are likewise available for WizardLM's WizardCoder 15B 1.0, and smspillaz/ggml-gobject provides a GObject-introspectable wrapper for using GGML on the GNOME platform. For GPTQ rather than GGML, a typical invocation is: python -m santacoder_inference bigcode/starcoderbase --wbits 4 --groupsize 128 --load starcoderbase-GPTQ-4bit-128g/model. There is also a C++ example running StarCoder inference using the ggml library; please note that these StarCoder GGMLs are not compatible with llama.cpp. A known ggml issue to watch for with MPT-family models is "ggml_new_tensor_impl: not enough space in the context's memory pool" (ggerganov/ggml#171).
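The block structure behind these quantization formats can be illustrated with a toy version: one floating-point scale per block of 32 weights, plus a 4-bit integer per weight. This is a simplified sketch of the idea only, not the actual Q4_K layout (which adds super-block scales and minimums):

```python
BLOCK = 32  # ggml quantizes weights in fixed-size blocks

def quantize_q4_block(weights):
    """Toy 4-bit absmax quantization of one 32-weight block: store one
    scale plus a 4-bit code (0..15) per weight."""
    assert len(weights) == BLOCK
    amax = max(abs(w) for w in weights) or 1.0
    scale = amax / 7.0  # map [-amax, amax] onto integer range [-7, 7]
    q = [max(0, min(15, round(w / scale) + 8)) for w in weights]
    return scale, q

def dequantize_q4_block(scale, q):
    """Recover approximate weights from the stored scale and codes."""
    return [(qi - 8) * scale for qi in q]

weights = [(i - 16) / 10 for i in range(BLOCK)]  # -1.6 .. 1.5
scale, q = quantize_q4_block(weights)
restored = dequantize_q4_block(scale, q)
```

The reconstruction error is bounded by half the block scale, which is why blocks with one large outlier weight quantize poorly; the k-quant super-block formats exist to mitigate exactly that.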
As per the StarCoder documentation, StarCoder outperforms the closed-source code LLM code-cushman-001 by OpenAI (used in the early stages of GitHub Copilot), and reproduced results of StarCoder on MBPP are reported alongside HumanEval. Contributions are welcome: make a fork, make your changes, and open a PR; PRs to this project and the corresponding GGML fork are both appreciated. For command-line arguments, please refer to --help. On CPU, the runner will attempt to use the OpenBLAS library for faster prompt ingestion, and WebAssembly (WASM) is also supported. Conversion emits a .bin file, which you can then use with the matching example program (gpt-j, starcoder, and so on). This is the same model as SantaCoder, but it can be loaded with transformers >= 4.28. Useful background reading: "GGML - Large Language Models for Everyone," a description of the GGML format provided by the maintainers of the llm Rust crate (which provides Rust bindings for GGML), and marella/ctransformers, Python bindings for GGML models. This repo is the result of quantising StarCoder to 4-bit, 5-bit and 8-bit GGML for CPU inference using ggml. Hugging Face and ServiceNow released StarCoder as a free AI code-generating alternative to GitHub's Copilot (powered by OpenAI's Codex), DeepMind's AlphaCode, and Amazon's CodeWhisperer. One WizardLM-family model is even reported to slightly outperform some closed-source LLMs on GSM8K, including ChatGPT 3.5.
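Sampling flags such as --temp reshape the next-token distribution before a token is drawn. A minimal sketch of top-k filtering combined with temperature scaling (the k and temperature values here are illustrative, not the defaults of any particular runner):

```python
import math

def top_k_temperature(logits, k=40, temp=0.8):
    """Scale logits by 1/temperature, keep only the k most likely
    tokens, and renormalize into a probability distribution.
    Lower temp sharpens the distribution; smaller k prunes the tail."""
    scaled = [l / temp for l in logits]
    kth = sorted(scaled, reverse=True)[k - 1] if k < len(scaled) else min(scaled)
    filtered = [s if s >= kth else float("-inf") for s in scaled]
    m = max(filtered)                       # subtract max for stability
    exps = [math.exp(f - m) for f in filtered]
    z = sum(exps)
    return [e / z for e in exps]

probs = top_k_temperature([2.0, 1.0, 0.5, -1.0], k=2, temp=0.8)
```

Note that on ties at the k-th value this keeps every tied token, which is a common and harmless simplification.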
Typical sampling flags in the CLI examples include --temp 0.8, --repeat_last_n 64, and --repeat_penalty, which trade diversity against repetition. StarCoder is a high-performance LLM for code covering over 80 programming languages, trained on permissively licensed code from GitHub, and it is particularly strong in the most common of those languages. The compatible backends list spans llama-cpp (GGUF/GGML), LLaMA 2, Dolly v2, GPT-2, GPT-J, GPT-NeoX, MPT, Replit, and StarCoder. For serving, TGI enables high-performance text generation using tensor parallelism and dynamic batching for the most popular open-source LLMs, including StarCoder, BLOOM, GPT-NeoX, Llama, and T5. To prepare a model locally, install the requirements from requirements.txt and convert the HF model to ggml with the provided Python script; the GPT4All Chat UI can then load the result. A note on naming: the uncensored WizardLM builds are WizardLM trained on a subset of the dataset in which responses containing alignment or moralizing were removed. Be aware that inference for the full StarCoder on an M1 Mac's CPU can be very slow, so prefer the smaller or more aggressively quantized variants there.
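The repetition penalty controlled by --repeat_last_n and --repeat_penalty can be sketched as a direct transform on the logits. This mirrors the llama.cpp-style rule (divide positive logits, multiply negative ones), but it is a simplified illustration, not the exact runtime code:

```python
def apply_repeat_penalty(logits, recent_tokens, penalty=1.1):
    """Penalize tokens seen in the last repeat_last_n tokens: a positive
    logit is divided by the penalty and a negative one multiplied, so the
    token becomes less likely either way."""
    out = list(logits)
    for t in set(recent_tokens):
        out[t] = out[t] / penalty if out[t] > 0 else out[t] * penalty
    return out

new_logits = apply_repeat_penalty([1.0, -2.0, 3.0], recent_tokens=[0, 1], penalty=1.1)
```

A penalty of 1.0 is a no-op; values much above ~1.3 tend to degrade code generation, where legitimate repetition (brackets, keywords) is common.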
starcoderbase-GGML files are model files for BigCode's StarCoder, a text-generation model trained on 80+ programming languages; see the model card's summary, use, limitations, training, license, and citation sections for details. LM Studio supports any ggml Llama, MPT, and StarCoder model on Hugging Face (Llama 2, Orca, Vicuna, Nous Hermes, WizardCoder, MPT, etc.). This repo is the result of quantising to 4-bit, 5-bit and 8-bit GGML for CPU inference using ggml. The model uses Multi Query Attention, a context window of 8192 tokens, and was trained using the Fill-in-the-Middle objective on 1 trillion tokens. You'll need around 4 GB free to run the smaller quantizations smoothly. When converting llama-family checkpoints, note that the tokenizer class has been changed from LLaMATokenizer to LlamaTokenizer, and make sure companion files such as params.json and the tokenizer checklist (.chk) are present, or the loader will fail.
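Multi-query attention matters at an 8192-token context because the key/value cache shrinks by the number of attention heads: all query heads share a single K/V head. A back-of-the-envelope sketch (the layer and head counts below are hypothetical, not StarCoder's exact configuration):

```python
def kv_cache_bytes(n_layers, n_ctx, head_dim, n_kv_heads, bytes_per_elt=2):
    """Size of the K+V cache: 2 tensors (K and V) per layer, each holding
    n_ctx positions of n_kv_heads * head_dim elements (fp16 by default)."""
    return 2 * n_layers * n_ctx * n_kv_heads * head_dim * bytes_per_elt

# Hypothetical 40-layer model, 48 heads of dim 128, at full 8192 context:
mha = kv_cache_bytes(40, 8192, 128, n_kv_heads=48)  # multi-head: K/V per head
mqa = kv_cache_bytes(40, 8192, 128, n_kv_heads=1)   # multi-query: shared K/V
```

Under these assumed dimensions the MQA cache is 48x smaller (160 MiB instead of 7.5 GiB), which is what makes fast large-batch inference at long context practical.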
Minotaur 15B is an 8K-context fine-tune built only on completely open datasets, making the model reproducible by anyone. On the serving side, LocalAI is a drop-in replacement for the OpenAI API running on consumer-grade hardware: it runs ggml, gguf, GPTQ, ONNX and TF-compatible models (llama, llama2, rwkv, whisper, vicuna, koala, cerebras, falcon, dolly, starcoder, and many others). HF models can now be converted to ggml, making big code models simpler to run locally; the table in the repository lists all the compatible model families and the associated binding repositories. Meta's fine-tuned Llama 2-Chat LLMs are optimized for dialogue use cases. You can also play with the model on the hosted StarCoder Playground. The Python variant was created by fine-tuning StarCoderBase on 35B Python tokens. For local experiments, set up an environment with cd llm-gpt4all, python3 -m venv venv, source venv/bin/activate, then install the dependencies. go-skynet is a community-driven organization created by mudler whose goal is to enable anyone to democratize and run AI locally. For context, GitHub Copilot is a service built upon OpenAI's Codex model; Codex itself is an offshoot of GPT-3, OpenAI's groundbreaking text-generating AI. Finally, quantising StarCoder with ggml to 8-bit (or 4-bit) works, but using the GPU for inference with those files can run into difficulties.
In the editor extension, a status item appears that you can click to toggle inline completion on and off. ialacol (pronounced "localai") is a lightweight drop-in replacement for the OpenAI API supporting ggml backends (llama.cpp, gptneox, and others). In the Python bindings, .numpy() returns a numpy view over a ggml tensor; if the tensor is quantized it returns a copy instead (which requires allow_copy=True). StarCoder is part of the BigCode Project, a joint effort of ServiceNow and Hugging Face. A recent change also allows keeping the model data in VRAM to speed up inference. Two known issues: a deprecation warning appears during inference with StarCoder in fp16, and when running StarCoder (or StarChat Alpha) the model may not stop at the end token, continuing to generate until it reaches the maximum token count. These models work with llama.cpp-compatible stacks, text-generation-webui, or llama-cpp-python, and the newest llama.cpp builds handle GGUF models including Mistral. With ctransformers, loading looks like from_pretrained('marella/gpt-2-ggml'); if a model repo has multiple model files, pass model_file to pick one, and load the tokenizer from the original model repo. Akin to open-source AI-powered code generators, Code Llama can complete code and debug existing code across a range of programming languages, including Python and C++.
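Until the end-token issue is fixed upstream, a client-side workaround is to truncate generated text at known stop sequences. A minimal sketch (the stop strings shown are assumptions; use whatever end tokens your model's chat template actually emits):

```python
def truncate_at_stop(text, stop_sequences):
    """Cut generated text at the earliest occurrence of any stop
    sequence - a client-side guard for runtimes that keep generating
    past the end token."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

out = truncate_at_stop(
    "def f():\n    return 1\n<|endoftext|>garbage",
    ["<|endoftext|>", "<|end|>"],
)
```

In a streaming setup, apply the same check to the accumulated buffer on every token and stop requesting more once a match is found.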
LocalAI's feature set spans a REST API, containers and Kubernetes deployment, and backends including bloom, falcon, tts, llama, alpaca, vicuna, guanaco, gpt-neox, rwkv and stable-diffusion. The base StarCoder models are 15.5B parameters, with opt-out requests excluded from the training data; one model card describes a training mix combining The Stack (v1.2) at 1x with a Wikipedia dataset upsampled five times (5x). Related projects include Refact AI, an open-source coding assistant with fine-tuning on your codebase, autocompletion, code refactoring, code analysis and integrated chat, and LoLLMs-WebUI, a web UI which supports nearly every backend out there. Turbopilot now supports state-of-the-art local code-completion models (WizardCoder, StarCoder, SantaCoder) which provide more programming languages and fill-in-the-middle support; the program can run on the CPU, no video card required, and 4-bit GPTQ models are available for GPU inference. A known ggml issue with StarCoder is "not enough space in the context's memory pool" (ggerganov/ggml#158), and pre-allocating the input and output tensors in a different buffer has been proposed as a fix. Several derived models were trained with a WizardCoder base, which itself uses a StarCoder base model. To develop locally, install the dependencies and test dependencies with an editable pip install.
Like llama.cpp, this is a C++ implementation built on the ggml library. StarCoder and StarCoderBase are large language models for code (Code LLMs) trained on permissively licensed GitHub data: 80+ programming languages plus text extracted from Git commits, GitHub issues, and Jupyter notebooks. (Salesforce CodeGen is also open source, and BSD licensed, so more permissive than StarCoder's OpenRAIL ethical license; its latest iteration is CodeGen2.) Not all ggml models are compatible with llama.cpp, which only covers llama-family architectures; ctransformers supports those, plus all the models supported by the separate ggml library (MPT, StarCoder, Replit, GPT-J, GPT-NeoX, and others). ctransformers is designed to be as close as possible to a drop-in replacement for Hugging Face transformers, and is compatible with LlamaTokenizer, which also makes it straightforward to integrate into other libraries. Once converted, the resulting .bin can be added to text-generation-webui as well. However, most existing models are pre-trained solely on raw code data without instruction fine-tuning, and while StarCoder's pass@1 on HumanEval is good for an open model, GPT-4 still scores higher. The Hugging Face team also conducted an experiment to see if StarCoder could act as a tech assistant in addition to generating code.
This is GGML-format quantised 4-bit, 5-bit and 8-bit models of StarCoder; a compatible-models table is maintained alongside the repo. One proposed allocator change would pre-allocate tensors so that the calls to ggml_allocr_alloc and ggml_allocr_is_measure become unnecessary. For hardware reference, see the optimized performance of the chatglm2-6b and llama-2-13b-chat models on a 12th Gen Intel Core CPU and Intel Arc GPU. The development of LM Studio is made possible by the llama.cpp project. The StarCoder release also takes several important steps towards a safe open-access model, including an improved PII redaction pipeline. GPTQ quantization is a state-of-the-art quantization method which results in negligible output-performance loss compared with the prior 4-bit state of the art. The GPT4All Chat Client lets you easily interact with any local large language model; keep in mind that the model has not been aligned to human preferences with techniques like RLHF, so it may generate problematic content. The main example uses the gpt_bigcode model type. The hash sum indicates the ggml version used to build your checkpoint, and the runtime features robust infill sampling, meaning the model can "read" text on both sides of an insertion point. Front ends like text-generation-webui offer three interface modes (default two-column, notebook, and chat) and multiple model backends, including transformers and llama.cpp.
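Whether a given quantization fits in memory comes down to parameter count times bits per weight. A rough sketch for a 15.5B model (the bpw figures are approximate assumptions; real q4 variants carry per-block scale overhead on top of the 4 bits):

```python
def model_file_size_gb(n_params, bits_per_weight):
    """Rough size of a quantized checkpoint: parameters times bits per
    weight, with any per-block scale overhead folded into the bpw figure."""
    return n_params * bits_per_weight / 8 / 1e9

fp16 = model_file_size_gb(15.5e9, 16)   # half precision
q8   = model_file_size_gb(15.5e9, 8)    # 8-bit quantization
q4   = model_file_size_gb(15.5e9, 4.5)  # assumed effective bpw for a q4 variant
```

By this estimate the fp16 checkpoint needs roughly 31 GB while a 4-bit build needs under 9 GB, which is why the quantized files are the ones that fit CPU inference on ordinary machines.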
But for the GGML / GGUF formats, running a model is mostly about having enough RAM. You need transformers >= 4.28.1 to use the GPTBigCode architecture, and llama.cpp-style runners let you run the model locally, even on an M1 machine; if memory is tight, adding swap (e.g. 40 GB) is a common workaround. KoboldCpp is an easy-to-use AI text-generation application for GGML and GGUF models. For reproducibility, the checkpoint of each experiment is uploaded to a separate branch, with intermediate checkpoints as commits on those branches, so you can load any of them. StarCoder is a 15.5B parameter language model trained on English and 80+ programming languages, with pre-training data from The Stack (v1.2) and opt-out requests excluded. A dated aside (2023-07-12) flags questions about replit-code-instruct-glaive's extremely strong HumanEval performance. If loading fails with "ggml-model.bin' (bad magic)" or "GPT-J ERROR: failed to load," the file was produced with an incompatible ggml version and needs re-conversion. To cite the model, use the StarCoder paper: "StarCoder: may the source be with you!" by Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, et al.
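The "bad magic" error comes from the 4-byte magic number at the start of every ggml-family file. A sketch of checking it up front, so you catch a stale or mismatched conversion before a long load attempt (the magic values shown are illustrative of the scheme; the real formats define several more, such as versioned ggjt files):

```python
import struct

# Illustrative magic values: "ggml" and "GGUF" read as little-endian uint32.
KNOWN_MAGICS = {
    0x67676D6C: "ggml (unversioned)",
    0x46554747: "gguf",
}

def identify_magic(header: bytes) -> str:
    """Interpret the first 4 bytes of a model file as a little-endian
    uint32 and look it up; a miss is what loaders report as 'bad magic'."""
    if len(header) < 4:
        return "bad magic (file too short)"
    (magic,) = struct.unpack("<I", header[:4])
    return KNOWN_MAGICS.get(magic, "bad magic")
```

In practice you would read the header with open(path, "rb").read(4); when the magic is unrecognized, re-running the conversion script with the current tooling is the usual fix.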