Exploring LLM Applications with Ray Project
PLUS - Web AI: Deep Learning in Your Browser
Essential AI Content for Software Devs, Minus the Hype
Here is this week’s curated list of Generative AI resources relevant for software developers. If you find this useful, I would really appreciate it if you shared it with others.
In this edition:
📖 TUTORIALS & CASE STUDIES
🧰 TOOLS
📰 NEWS
📖 TUTORIALS & CASE STUDIES
Exploring LLM Applications with Ray Project
read time: 10 minutes
The Ray Project provides a comprehensive guide on leveraging Large Language Models (LLMs) for various applications. It offers practical examples and code snippets to help developers understand and implement LLMs in their projects.
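To give a flavor of the Ray primitives the guide builds on, here is a minimal sketch (our own illustration, not taken from the guide) that fans prompts out to an LLM in parallel using Ray remote tasks; the call_llm body is a placeholder for whatever model or API client you use.

```python
# Minimal sketch: run LLM calls in parallel with Ray remote tasks.
# call_llm() is a placeholder; swap in your model or API client of choice.
import ray

ray.init()  # start (or connect to) a local Ray cluster

@ray.remote
def call_llm(prompt: str) -> str:
    # Placeholder response; a real task would invoke an LLM here.
    return f"response to: {prompt}"

prompts = ["Summarize Ray in one line.", "What is an actor?", "Explain tasks."]
futures = [call_llm.remote(p) for p in prompts]   # schedule tasks in parallel
responses = ray.get(futures)                       # block until all complete
print(responses)
```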
Top Generative AI Courses and Training Resources
read time: 15 minutes
This article provides a comprehensive list of the top 10 generative AI courses and training resources, ranging from free to paid and from beginner to advanced, across platforms such as Coursera, edX, and Udemy. It also discusses the future of AI training, emphasizing interactive tutorials and real-world use cases.
A Comprehensive Guide for Building RAG-based LLM Applications (Part 1)
read time: 15 minutes
This comprehensive guide provides step-by-step instructions on how to build retrieval augmented generation (RAG) based large language model (LLM) applications. It covers topics such as developing the application from scratch, scaling workloads, evaluating performance, implementing LLM hybrid routing, serving the application, and analyzing the impact of LLM applications. The guide also explores the limitations of base LLMs and how RAG-based applications address those limitations.
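The core retrieve-then-generate loop the guide describes can be boiled down to a few lines. The sketch below is our own toy illustration, not code from the guide: embed() is a stand-in for a real embedding model, the in-memory document list stands in for a vector database, and the resulting prompt would be sent to your LLM of choice.

```python
# Toy RAG loop: embed documents, retrieve the most similar one for a question,
# and stuff it into the prompt as context. embed() is a placeholder.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: hash bytes into a fixed-size, unit-length vector.
    vec = np.zeros(64)
    for i, ch in enumerate(text.encode()):
        vec[i % 64] += ch
    return vec / (np.linalg.norm(vec) + 1e-9)

docs = ["Ray schedules remote tasks.", "RAG adds retrieved context to prompts."]
doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(question: str, k: int = 1) -> list[str]:
    scores = doc_vecs @ embed(question)            # cosine similarity (unit vectors)
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("How does RAG reduce hallucinations?"))
```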
GPT 3.5 vs Llama 2: A Comparative Study on Fine-Tuning
read time: 15 minutes
This blog post provides a comprehensive comparison of fine-tuning GPT 3.5 and Llama 2 on SQL and functional representation tasks. While GPT 3.5 performs slightly better, it costs 4-6x more to train and deploy, making Llama 2 a cost-effective alternative for developers.
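For reference, gpt-3.5-turbo fine-tuning expects chat-formatted JSONL training data; the snippet below (our own illustration, not from the post, with made-up field contents) writes one such record for a text-to-SQL task.

```python
# Illustrative only: one chat-format JSONL training record of the kind used
# for gpt-3.5-turbo fine-tuning. The example contents are invented.
import json

record = {
    "messages": [
        {"role": "system", "content": "Translate the question into SQL."},
        {"role": "user", "content": "How many users signed up in 2023?"},
        {"role": "assistant",
         "content": "SELECT COUNT(*) FROM users WHERE signup_year = 2023;"},
    ]
}

with open("train.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")   # one JSON object per line
```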
Deploying Generative AI Models on Amazon EKS
read time: 20 minutes
This post provides a comprehensive guide on deploying Generative AI models on Amazon Elastic Kubernetes Service (EKS). A Terraform template is included, which you can use as a starting point for your own work.
🧰 TOOLS
Web AI: Deep Learning in Your Browser
read time: 3 minutes
Web AI is a TypeScript library enabling deep learning models to run directly in the web browser or Node.js. It simplifies AI integration into web applications, eliminating the need for complex server-side infrastructure or third-party APIs.
Vanna: Open-Sourcing AI-Powered SQL Generation
read time: 8 minutes
Vanna, an AI-powered SQL generation tool, is now open-source. It offers high accuracy rates, custom training, and security. The open-source version allows local deployment and customization, and aims to establish a standard for AI-generated SQL. Users can contribute via GitHub to enhance the tool.
ExLlamaV2: A New Inference Library for Local LLMs
read time: 15 minutes
ExLlamaV2 is a new inference library for running local LLMs on modern consumer GPUs. It's faster and more versatile than its predecessor, with support for a new quant format. The library is still in its early stages and requires further testing and tuning.
Brainglue: A Playground for Creative Prompt Chaining
read time: 3 minutes
Brainglue is an AI playground for crafting and implementing prompt chains, enhancing AI reasoning. It offers a user-friendly interface, a template gallery, and an easy-to-integrate API, making the transition from experimentation to real-world applications seamless.
ModuleFormer: A New Efficient and Extendable MoE-based Architecture
read time: 10 minutes
ModuleFormer is a new MoE-based architecture that offers efficiency, extendability, and specialization. It only activates a subset of its experts for each input token, making it more efficient than dense LLMs. It can be easily extended with new experts to learn new knowledge and can specialize a subset of experts to the finetuning task.
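As a rough illustration of the sparse routing idea (our own toy example, not ModuleFormer's actual code), the sketch below scores every expert for a token but only runs the top-k, which is what makes MoE layers cheaper than dense ones.

```python
# Toy sparse MoE routing: a router scores all experts per token, but only the
# top-k experts are actually evaluated. Experts here are just random matrices.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2

router = rng.normal(size=(d, n_experts))       # router projection
experts = rng.normal(size=(n_experts, d, d))   # one weight matrix per expert

def moe_layer(token: np.ndarray) -> np.ndarray:
    scores = token @ router                     # score every expert
    chosen = np.argsort(scores)[-top_k:]        # indices of the top-k experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                    # softmax over the chosen experts
    # Only the chosen experts run; the rest are skipped entirely.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))

print(moe_layer(rng.normal(size=d)).shape)      # (16,)
```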
InstructGPT: A Safer, More Aligned Language Model
read time: 15 minutes
InstructGPT, OpenAI's new language model, outperforms GPT-3 in following instructions and generating less toxic outputs. Trained using reinforcement learning from human feedback, it's now the default model on OpenAI's API, demonstrating the potential of human-in-the-loop fine-tuning for improving AI safety and reliability.
📰 NEWS & EDITORIALS
Revolutionizing DevOps with Generative AI
read time: 8 minutes
This article explores how Generative AI (GenAI) can transform DevOps, from planning to deployment. GenAI can assist with writing user stories, performing impact analysis, automating code generation, producing unit tests, and even writing release notes. However, trust in GenAI's output and security are crucial for its successful implementation.
AI Reshaping Work: Centaurs and Cyborgs on the Jagged Frontier
read time: 15 minutes
This study reveals that consultants using ChatGPT-4 significantly outperformed those who did not, completing more tasks, faster, and with higher quality. However, over-reliance on AI can lead to errors. The key to effective AI use lies in the 'Centaur' or 'Cyborg' approaches, strategically dividing or integrating tasks between human and AI.
Why Open Source AI Will Triumph
read time: 15 minutes
This article argues against the notion that AI will be dominated by a few large language model providers. It emphasizes the importance of open source AI, stating that it offers more control, privacy, security, and adaptability, and is critical for AI native businesses. The author believes that despite current hype around closed source AI, open source AI will eventually prevail.
Securing AI: The Emerging Platform Opportunity
read time: 15 minutes
This article discusses the security challenges and opportunities presented by generative AI. It highlights the rise of a new category of platforms to secure AI, the AI threat landscape, and the need for new tools to protect the security stack of generative AI.
Anyscale Optimizes Open-Source AI Deployments with Endpoints
read time: 8 minutes
Anyscale, the lead commercial vendor behind the open-source Ray framework, has announced the general availability of Anyscale Endpoints. This service enables organizations to fine-tune and deploy large language models (LLMs) easily. Anyscale is also expanding its partnership with Nvidia to optimize Nvidia's software for inference and training on the Anyscale Platform.
Thanks for reading, and we will see you next time.
Follow me on Twitter and DM me links you would like included in a future newsletter.