Learn to leverage GenAI in your apps in an afternoon

PLUS: Add an AI Junior Dev to Your Team

DevThink.AI

Essential AI Topics for Software Devs, Minus the Hype

In this edition

  • Learn to leverage GenAI in your apps in an afternoon

  • A Few Frameworks for ML Cloud deployments

  • Latest AI tools and trends for software developers

🔎 Learn to leverage GenAI in your apps in an afternoon

Are you a software developer looking to integrate generative AI into your projects but unsure where to begin? You're not alone: the hype surrounding generative AI can be overwhelming.

To help get you started, I've created a straightforward 3-step program that can be completed in just one afternoon. The goal is to provide you with a strong foundation to kickstart your generative AI journey, upon which you can continue building and expanding your skills.

Step 1) Some history and a high-level overview

History: [10 min read] In A Brief History of Generative AI, Matt White traces the history of Generative AI from 1960 to the present day.

What is Generative AI: [10 min read] In this article from NVIDIA, you will learn the broader classifications of Generative AI along with some common use cases for applying it.

Step 2) Interacting with LLMs effectively

Prompt Engineering Guide [45 min read]
Prompt engineering is the process of carefully crafting the input prompts that are provided to large language models (LLMs) in order to get high-quality, targeted responses from the models.

This site covers techniques like zero-shot prompting, chain-of-thought prompting, program-aided language models, graph prompting, and more. It also surveys risks, papers, tools, datasets, and courses on prompt engineering.
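To make the techniques above concrete, here is a minimal sketch of three prompting styles as plain string templates. The sentiment task and the examples are invented for illustration; the resulting prompt text could be sent to any LLM.

```python
# Sketch of three common prompting styles, as plain string templates.
# The classification task and examples are made up for illustration.

def zero_shot(text: str) -> str:
    # No examples: the model relies entirely on its pretraining.
    return f"Classify the sentiment as positive or negative.\n\nText: {text}\nSentiment:"

def few_shot(text: str) -> str:
    # A handful of labeled examples steer the output format and behavior.
    examples = (
        "Text: I loved this film.\nSentiment: positive\n\n"
        "Text: Terrible service, never again.\nSentiment: negative\n\n"
    )
    return f"Classify the sentiment as positive or negative.\n\n{examples}Text: {text}\nSentiment:"

def chain_of_thought(question: str) -> str:
    # Asking the model to reason step by step often helps on multi-step tasks.
    return f"{question}\n\nLet's think step by step."

print(few_shot("The plot dragged on forever."))
```

The only difference between the three is what surrounds the user's input, which is exactly why prompt engineering is cheap to experiment with.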

Step 3) Building applications which utilize models

Build a simple chat app: [45-min hands-on tutorial]
This tutorial teaches you how to build an AI-powered chatbot app using an open-source, no-cost LLM and Streamlit. In a few lines of code, you create a Hugging Face chatbot via the HugChat API and deploy it as a Streamlit app on the Streamlit Community Cloud.
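The heart of that tutorial is a simple append-and-respond loop. A runnable sketch of that structure is below; `generate_response` is a placeholder for where the tutorial calls the HugChat API, and the Streamlit widget wiring is omitted so the skeleton runs anywhere.

```python
# Minimal skeleton of a chat app's message loop. The real tutorial wires
# this into Streamlit widgets and replaces generate_response with a call
# to the HugChat API; this stand-in keeps the sketch self-contained.

def generate_response(prompt: str) -> str:
    # Placeholder for the actual LLM call.
    return f"(model reply to: {prompt})"

def chat_turn(history: list, user_input: str) -> list:
    # Append the user message, query the model, append the assistant reply.
    history = history + [{"role": "user", "content": user_input}]
    reply = generate_response(user_input)
    return history + [{"role": "assistant", "content": reply}]

history = []
history = chat_turn(history, "Hello!")
print(history[-1]["content"])
```

In the Streamlit version, `history` would typically live in `st.session_state` so it survives reruns of the script.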

Get to know LangChain: [1 hr video tutorial]
LangChain provides a set of tools to simplify building applications that leverage large language models.
This course teaches you how to use the LangChain framework to expand the capabilities of large language models for application development. You will learn how to call and parse LLM responses, use memories to manage context, create chains of operations, apply LLMs to your own data, and use LLMs as reasoning agents.

What’s next?

Build, build, build

There are plenty of resources out there to continue learning. Choose your medium and explore.

☁️ A Few Frameworks for ML Cloud deployments

AI Data Platform for Data Scientists
Exspanse is an integrated development platform that offers over 50 AI services, pre-trained models, APIs, SDKs, and visual tools to help developers integrate AI into software applications. The platform assists with data ingestion, normalization, and enrichment, and supports federated learning to improve models over time. It positions itself as an end-to-end solution for organizations looking to build, deploy, and manage AI-powered applications.

Simplify Your ML Workflow with Replicate
At Replicate, they're making machine learning less of a headache. With just a few lines of code, you can run open-source ML models in the cloud. They offer a vast collection of ready-to-use models for language, video creation, image restoration, and more. They've also introduced Cog, a tool to package your ML models into production-ready containers. Replicate then lets you deploy these models at scale, with automatic API generation and flexible scaling.

Simplify LLM development across multiple clouds
dstack is an open-source tool that simplifies creating and deploying large language models across multiple clouds. It streamlines model development, reduces cloud costs, and frees users from vendor lock-in. You can set up dev environments and train or serve models with a simple CLI, and get started in under a minute.

🧰 Software Dev AI tools and trends

Add an AI Junior Dev to Your Team
Sweep is an AI tool that can act as a junior developer, fixing simple bugs and implementing small features directly within GitHub pull requests. After installing the Sweep GitHub app on a repository, developers can simply create an issue providing details of the requested change. Sweep then reads the codebase, plans the changes, and opens a pull request with code changes.

LlamaIndex introduces Data Agents

LlamaIndex has launched Data Agents, which are LLM-powered knowledge workers that can perform various tasks over data. They are capable of automated search and retrieval over different types of data, calling any external service API, storing conversation history, and fulfilling both simple and complex data tasks. LlamaIndex has also provided abstractions, services, and guides on both the agents side and tools side in order to build data agents.
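The pattern behind data agents is an LLM-driven loop that chooses a tool, runs it, and folds the result back into its reasoning. The sketch below is not LlamaIndex's API; the tool names and the keyword-based `pick_tool` heuristic are invented stand-ins for the LLM's tool-selection step.

```python
# The core data-agent pattern: choose a tool, run it, use the observation.
# Tool names and the pick_tool heuristic are invented for illustration;
# in a real agent an LLM makes these decisions.

def search_docs(query: str) -> str:
    # Stand-in for automated search/retrieval over indexed data.
    return f"top passage for '{query}'"

def call_api(query: str) -> str:
    # Stand-in for calling an external service API.
    return f"API result for '{query}'"

TOOLS = {"search": search_docs, "api": call_api}

def pick_tool(task: str) -> str:
    # Keyword heuristic standing in for the LLM's tool choice.
    return "api" if "weather" in task else "search"

def run_agent(task: str) -> str:
    tool = pick_tool(task)
    observation = TOOLS[tool](task)
    # A full agent would loop: feed the observation back to the LLM and
    # let it pick another tool or emit a final answer, keeping history.
    return f"final answer based on: {observation}"

print(run_agent("summarize the quarterly report"))
```

LlamaIndex's abstractions supply the real versions of each piece: tool wrappers, the reasoning loop, and conversation history storage.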

Taming Kubernetes with AI
Generative AI is being used to automate many aspects of Kubernetes development and deployment, from writing code to optimizing cluster resources. This is helping to reduce the complexity of Kubernetes and make it more accessible to a wider range of developers.

JetBrains bringing an in house developed AI assistant to their IDEs
Many third parties have made their AI code assistants available as JetBrains plugins. Now JetBrains has announced the availability of an assistant developed in house.
The new AI Assistant helps developers with tasks such as code completion, documentation generation, and troubleshooting. It is currently available in the EAP builds of IntelliJ IDEA, PyCharm, and other JetBrains IDEs, with releases in the stable versions of these IDEs to follow in the coming months.

LangSmith brings Light to LLM Blackboxes
LangChain has announced LangSmith, a new platform for debugging, testing, evaluating, and monitoring LLM applications. The tool helps developers move LLMs from prototype to production by providing visibility into model inputs and outputs, evaluation metrics, cost and latency tracking, and more.

Prompt Engineering Guide for Achieving Effective Use of AI Models
The Prompt Engineering Guide covers theories, practices, examples and techniques for optimizing prompts that interface with LLMs for various use cases.
The guide discusses fundamentals like prompt elements and general design tips, alongside techniques like zero-shot, few-shot and chain-of-thought prompting. It also goes over applied uses for program-aided LLMs and generating data and code, highlighting tools and datasets.

TypeChat: A Tool For Programming Assistants
Microsoft has released an experimental library called TypeChat that aims to bridge natural language and code by using TypeScript types. The library uses a language model's completions to generate structured JSON responses defined by the TypeScript types. This approach allows software to validate and correct the model's responses. TypeChat currently integrates with OpenAI and Azure AI models, though the authors note it should work with any chat completion API. Developers can use TypeChat to build programming assistants by defining API schemas.
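TypeChat's core trick is schema-constrained output: ask the model for JSON, validate it against a declared type, and retry with the validation error if the response doesn't conform. The Python sketch below illustrates that idea only; TypeChat itself does this with TypeScript types, and the schema checker and `fake_model` here are invented stand-ins.

```python
import json

# TypeChat's core idea, sketched in Python: request JSON from a model,
# validate it against a schema, and retry with the error on failure.
# SCHEMA, validate, and fake_model are illustrative stand-ins.

SCHEMA = {"item": str, "quantity": int}  # expected fields and their types

def validate(raw: str) -> dict:
    data = json.loads(raw)
    for field, typ in SCHEMA.items():
        if not isinstance(data.get(field), typ):
            raise ValueError(f"field '{field}' must be {typ.__name__}")
    return data

def fake_model(prompt: str) -> str:
    # First reply is malformed; the retry (carrying the error text) succeeds.
    if "must be" in prompt:
        return '{"item": "apples", "quantity": 3}'
    return '{"item": "apples", "quantity": "three"}'

def translate(request: str, retries: int = 1) -> dict:
    prompt = request
    for _ in range(retries + 1):
        try:
            return validate(fake_model(prompt))
        except (ValueError, json.JSONDecodeError) as err:
            # Feed the validation error back so the model can self-correct.
            prompt = f"{request}\nYour last answer was invalid: {err}. Reply with corrected JSON."
    raise RuntimeError("model never produced valid JSON")

print(translate("Add three apples to my cart"))
```

The validate-and-retry loop is what lets software "correct the model's responses" rather than trusting free-form text.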

Are Open Weights Truly Open Source?
The recent release of Meta's Llama 2 language model sparked debate over whether large AI models with open weights can truly be considered "open source". In this post, Alessio Fanelli explores the history and evolving definition of open-source software and how it applies to language models.

Thanks for reading and we will see you next time

Follow me on Twitter and DM me links you would like included in a future newsletter.