OpenAI Foundry, Hugging Face with AWS, and Roblox
Here are your weekly articles, guides, and news about NLP and AI chosen for you by NLPlanet!
- OpenAI Foundry will let customers buy dedicated compute to run GPT-3 and OpenAI's other models. Dedicated GPT-3 instances will soon be possible.
- Hugging Face and AWS partner to make AI more accessible. The partnership aims to bring open-source generative AI models to more developers.
- How AI Can Help Create and Optimize Drugs To Treat Opioid Addiction. It could save thousands of lives every year.
- Roblox Is Bringing Generative AI to Its Gaming Universe. The company aims to draw on the new technology’s code-writing ability to make its digital worlds even more customizable.
- How chatbots can change journalism. Or not. Experiments in refining newspaper articles with Claude, Anthropic's chatbot.
- Google Research on the natural sciences. Advances in physics, biology, and other scientific fields.
- MIT's Introduction to Data-Centric AI course. A free and open course that focuses on data quality.
- Lessons learned while using ChatGPT in education. Experiments from a teacher assigning homework with ChatGPT.
- Writing Essays With AI: A Guide. Advice on how to incorporate generative AI in your writing process.
- Text-to-Image Diffusion Models: A Guide for Non-Technical Readers. A simple explanation of how text-conditioned diffusion models work.
- Overcoming The Limitations Of Large Language Models. Ideas for complementing the capabilities of LLMs.
- Pretraining Language Models with Human Preferences. Exploring alternative pretraining objectives that also guide LMs to generate text aligned with human preferences (a conditional-training sketch follows this list).
- How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation. GPT models achieve very competitive translation quality for high-resource languages, while their capabilities remain limited for low-resource languages.
- The Wisdom of Hindsight Makes Language Models Better Instruction Followers. An alternative to RLHF: converting feedback into instructions by relabeling the original instruction in hindsight, then training the model for better alignment in a supervised manner (see the relabeling sketch after this list).
- Scaling Vision Transformers to 22 Billion Parameters. Scaling ViT from 4B to 22B parameters yields continued performance gains.
- Zero-Shot Information Extraction via Chatting with ChatGPT. Using ChatGPT for entity-relation triple extraction, named entity recognition, and event extraction (a prompt sketch follows this list).
- Aligning Text-to-Image Models using Human Feedback. RLHF for text-to-image models.
- LLaMA: Open and Efficient Foundation Language Models. Meta's repository for a collection of foundation language models ranging from 7B to 65B parameters.
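
For the pretraining-with-human-preferences item above, here is a minimal sketch of one idea in that space, conditional training: prefix each pretraining segment with a control token reflecting a preference score, then condition on the "good" token at inference time. The scorer, threshold, and token names below are illustrative placeholders, not the paper's exact implementation.

```python
# Minimal sketch of conditional training: prepend a control token that
# encodes whether a (hypothetical) preference scorer liked the segment,
# then train with the ordinary next-token LM loss on the relabeled corpus.

GOOD, BAD = "<|good|>", "<|bad|>"

def score_text(text: str) -> float:
    """Hypothetical preference scorer (e.g., a reward or toxicity model)."""
    return 1.0 if "thank" in text.lower() else 0.0

def relabel_corpus(segments, threshold=0.5):
    """Prepend a control token to every segment based on its score."""
    return [
        (GOOD if score_text(s) >= threshold else BAD) + " " + s
        for s in segments
    ]

corpus = ["Thank you for your help!", "You are useless."]
print(relabel_corpus(corpus))
# At generation time, prompts start with <|good|> to steer the model
# toward preferred continuations.
```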
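The "wisdom of hindsight" paper turns feedback into supervised data by relabeling instructions so that the model's actual output counts as a correct answer. Below is a toy sketch of that relabeling step; the data structures and the `relabel` rule are illustrative stand-ins, not the paper's algorithm.

```python
# Toy sketch of hindsight instruction relabeling: rather than penalizing a
# model for failing an instruction, rewrite the instruction so the generated
# output becomes a valid answer, then fine-tune on the resulting
# (relabeled instruction, output) pairs with a standard supervised loss.

from dataclasses import dataclass

@dataclass
class Episode:
    instruction: str   # what we asked for
    output: str        # what the model actually generated
    success: bool      # feedback: did the output satisfy the instruction?

def relabel(episode: Episode) -> tuple[str, str]:
    """Return an (instruction, output) pair usable for supervised fine-tuning."""
    if episode.success:
        return episode.instruction, episode.output
    # Hindsight step: describe the output as the goal it *did* achieve.
    return f"Write a response like: {episode.output[:40]}...", episode.output

episodes = [
    Episode("Summarize the article in one sentence.", "A long rambling text...", False),
    Episode("Translate 'bonjour' to English.", "hello", True),
]
sft_pairs = [relabel(e) for e in episodes]
print(sft_pairs)  # pairs then feed a standard supervised fine-tuning run
```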
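Finally, the zero-shot information extraction paper frames extraction tasks as chat turns. A minimal sketch of prompting a chat model for entity-relation triples with the OpenAI Python client follows; the prompt wording and model name are illustrative, not the paper's exact multi-turn prompts.

```python
# Minimal sketch of zero-shot entity-relation triple extraction by
# chatting with an LLM.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_triples(text: str) -> str:
    prompt = (
        "Extract all (subject, relation, object) triples from the text below. "
        "Return one triple per line.\n\n"
        f"Text: {text}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output helps extraction tasks
    )
    return response.choices[0].message.content

print(extract_triples("Marie Curie won the Nobel Prize in Physics in 1903."))
# Expected style of output: (Marie Curie, won, Nobel Prize in Physics)
```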