Hugging Face Transformers is an immensely popular Python library that provides pre-trained models for a wide variety of Natural Language Processing tasks. It originally supported only PyTorch but now supports TensorFlow as well. Nous-Hermes-Llama2-70b is a language model fine-tuned on over 300,000 instructions. It uses largely the same dataset as the earlier Hermes models, so training does not introduce drastic changes and the process remains smooth. Notable characteristics of the model include a lower hallucination rate and the absence of OpenAI censorship mechanisms.
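As a rough illustration of how such a checkpoint is typically loaded with the Transformers library, the sketch below uses the standard AutoTokenizer/AutoModelForCausalLM API; the repository id "NousResearch/Nous-Hermes-Llama2-70b" and the loading options are assumptions based on common Hub conventions, not details taken from this article.

```python
# Minimal sketch: loading a Llama-2-based instruction-tuned model with Transformers.
# The repo id and dtype/device settings are illustrative assumptions; a 70B model
# needs substantial GPU memory or weight offloading to run.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Nous-Hermes-Llama2-70b"  # assumed Hub repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",          # spread layers across available devices
)

inputs = tokenizer("Explain what instruction tuning is.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```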
The model was trained on a large dataset, notable both for the volume of data processed and for the variety of styles it contained. The data was gathered from different sources and merged into a single dataset, giving the processed dataset a broad diversity of knowledge. Contributors to the dataset and training effort include Teknium, Karan4D, Emozilla, Huemin Art, and Pygmalion AI. The model is trained using the Alpaca prompt format. To evaluate Alpaca, the research team conducted a human evaluation on inputs from the self-instruct evaluation set, a collection of diverse, user-oriented instructions covering a wide range of tasks.
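Since the article notes the Alpaca style is used, here is a brief sketch of the instruction/response prompt template usually meant by that; the exact template wording is an assumption drawn from the common Alpaca convention rather than from this write-up.

```python
# Minimal sketch of the Alpaca-style prompt template commonly used with
# instruction-tuned Llama models; the exact template text is an assumption.
def build_alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Format a user request as an Alpaca-style instruction prompt."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Summarize the main idea of instruction tuning in one sentence.")
print(prompt)
```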
The researchers also noted that prompt engineers stand to benefit from this model. They believe that releasing these assets will enable the academic community to perform controlled scientific studies on instruction-following language models, ultimately yielding new techniques to address the model's existing deficiencies. Deploying an interactive demo for Alpaca also poses potential risks, such as more widely disseminating harmful content and lowering the barrier for spam, so spam-detection techniques from NLP also play an important role around this model. The researchers acknowledge that these mitigation measures can be circumvented once the model weights are released or if users train their own instruction-following models.
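To make the content-filtering mitigation concrete, here is a minimal sketch that screens generated text with an off-the-shelf text-classification pipeline; the classifier checkpoint and the 0.5 threshold are illustrative assumptions, not part of the Alpaca or Nous-Hermes releases.

```python
# Minimal sketch of screening model outputs before showing them to users.
# The checkpoint "unitary/toxic-bert" and the 0.5 threshold are illustrative
# assumptions; any moderation or spam classifier could be swapped in.
from transformers import pipeline

moderator = pipeline("text-classification", model="unitary/toxic-bert")

def is_safe(generated_text: str, threshold: float = 0.5) -> bool:
    """Return False if the classifier flags the text above the threshold."""
    result = moderator(generated_text, truncation=True)[0]
    return not (result["label"].lower() == "toxic" and result["score"] >= threshold)

print(is_safe("Here is a friendly summary of the requested article."))
```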
Future plans for the project include iterating on the data, collecting higher-quality examples and applying techniques to filter out lower-quality data going forward. The researchers also intend to evaluate Alpaca more rigorously, starting with HELM (Holistic Evaluation of Language Models), which they hope will evolve to capture more generative, instruction-following scenarios. They would also like to study the risks of Alpaca and further improve its safety.
Check out the Project Page. All credit for this research goes to the researchers on this project.