Building Knowledge Graphs from Text Data with LangChain
PLUS - Apple's Ambitious Leap into Generative AI
Essential AI Content for Software Devs, Minus the Hype
In this edition we have some thorough and well-written tutorials, plus no shortage of news and releases from the ever-growing population of AI companies.
📖 TUTORIALS & CASE STUDIES
Building Knowledge Graphs from Text Data with LangChain
read time: 10 minutes
This tutorial guides you through building a Knowledge Graph from text data using LangChain and OpenAI's GPT-3.5. It covers the installation and usage of necessary packages, setting up API keys, defining prompts, initializing chains, and visualizing the built Knowledge Graph.
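To give a sense of what the tutorial's pipeline looks like in practice, here is a minimal sketch using LangChain's experimental LLMGraphTransformer with GPT-3.5; it assumes the langchain-openai and langchain-experimental packages and an OPENAI_API_KEY environment variable, and the tutorial's exact prompts and chains may differ.

```python
# Minimal sketch: extract a knowledge graph from free text with LangChain + GPT-3.5.
# Assumes langchain-openai, langchain-experimental, and an OPENAI_API_KEY env var.
from langchain_openai import ChatOpenAI
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
transformer = LLMGraphTransformer(llm=llm)

text = "Marie Curie won the Nobel Prize in Physics in 1903 together with Pierre Curie."
graph_docs = transformer.convert_to_graph_documents([Document(page_content=text)])

# Each graph document holds extracted nodes (entities) and relationships (edges),
# which you can feed into a graph store or the visualization library of your choice.
for node in graph_docs[0].nodes:
    print(node.id, node.type)
for rel in graph_docs[0].relationships:
    print(rel.source.id, "-", rel.type, "->", rel.target.id)
```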
Master Prompt Engineering with Llama 2
read time: 5 minutes
DeepLearning.AI offers a short course on prompt engineering with Llama 2 models. The course covers best practices for prompting, interaction with various Llama 2 models, and building safe AI applications. It's free for a limited time during the platform's beta phase.
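For a flavor of what "best practices for prompting" means with these models, the Llama 2 chat variants expect a specific prompt template; a small illustration (not taken from the course material):

```python
# Illustrative only: the chat prompt template Llama 2 instruction-tuned models expect.
# System instructions sit inside <<SYS>> tags, each user turn inside [INST] ... [/INST].
system_prompt = "You are a helpful, honest assistant. Answer concisely."
user_message = "In one sentence, what is prompt engineering?"

prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
print(prompt)
```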
Building a Chat App with LangChain, LLMs, and Streamlit for Complex SQL Database Interaction
read time: 20 minutes
This article provides a detailed guide on building a chat application using LangChain, Large Language Models (LLMs), and Streamlit. The application allows complex interaction with a SQL database, demonstrating the potential of generative AI in simplifying database queries.
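The core natural-language-to-SQL step can be sketched in a few lines (Streamlit UI omitted); this assumes a hypothetical local SQLite file and an OPENAI_API_KEY environment variable, and the article's actual schema, prompts, and UI wiring will differ.

```python
# Minimal sketch of the natural-language-to-SQL piece (Streamlit UI omitted).
# Assumes a hypothetical local SQLite file chinook.db and an OPENAI_API_KEY env var.
from langchain_openai import ChatOpenAI
from langchain_community.utilities import SQLDatabase
from langchain.chains import create_sql_query_chain

db = SQLDatabase.from_uri("sqlite:///chinook.db")
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

# The chain turns a natural-language question into a SQL query for this database's schema.
write_query = create_sql_query_chain(llm, db)
query = write_query.invoke({"question": "How many customers are there?"})
print(query)          # e.g. SELECT COUNT(*) FROM "Customer"
print(db.run(query))  # execute the generated SQL against the database
```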
Chatting With Your Data Ultimate Guide
read time: 10 minutes
This ultimate guide provides step-by-step instructions for building a chatbot that can chat with your personal data. It covers retrieval augmented generation, document loading and splitting, creating embeddings, and using different retrieval techniques, and it includes code examples plus a video series for those who prefer visual content. Read more here.
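The guide's core loop (load, split, embed, retrieve, answer) can be sketched roughly as follows; this assumes a hypothetical notes.txt file, a local Chroma vector store, OpenAI embeddings, and an OPENAI_API_KEY environment variable, while the guide itself goes well beyond this single retrieval technique.

```python
# Minimal RAG sketch mirroring the guide's steps: load -> split -> embed -> retrieve -> answer.
# Assumes a hypothetical notes.txt, langchain + langchain-openai, and an OPENAI_API_KEY env var.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.document_loaders import TextLoader
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.chains import RetrievalQA

docs = TextLoader("notes.txt").load()
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150).split_documents(docs)

# Embed the chunks into a local Chroma vector store and expose it as a retriever.
vectorstore = Chroma.from_documents(chunks, OpenAIEmbeddings())
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-3.5-turbo", temperature=0),
    retriever=vectorstore.as_retriever(search_kwargs={"k": 4}),
)
print(qa.invoke({"query": "What did I write about project deadlines?"})["result"])
```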
🧰 TOOLS
Introducing Mistral Large: A New Flagship Language Model
read time: 10 minutes
Mistral AI has released Mistral Large, a cutting-edge text generation model with top-tier reasoning capabilities. It excels in multilingual tasks and code generation. Available through la Plateforme and Azure, it's the second-ranked model generally available through an API. A smaller, latency-optimized model, Mistral Small, has also been launched.
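For developers, access on la Plateforme is a straightforward HTTP call with an OpenAI-style chat schema; a minimal sketch, assuming a MISTRAL_API_KEY environment variable (Azure deployments use their own endpoint and auth):

```python
# Minimal sketch: call Mistral Large on la Plateforme (OpenAI-compatible chat schema).
# Assumes a MISTRAL_API_KEY environment variable; Azure uses a different endpoint/auth.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-latest",
        "messages": [{"role": "user", "content": "Write a Python one-liner that reverses a string."}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```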
Introducing DSPy: A Powerful Framework for Optimizing Language Model Prompts
read time: 15 minutes
Stanford NLP introduces DSPy, a framework for optimizing language model prompts and weights within a pipeline. It separates program flow from parameters, introduces new optimizers, and can teach models like GPT-3.5 and GPT-4 to be more reliable. It offers a systematic approach to solving hard tasks with language models.
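As a taste of the programming model, a DSPy pipeline declares a signature and a module instead of a hand-written prompt; a minimal sketch, assuming a DSPy release where dspy.OpenAI is the LM wrapper (newer versions expose dspy.LM instead) and an OPENAI_API_KEY environment variable:

```python
# Minimal DSPy sketch: declare a signature, wrap it in a ChainOfThought module,
# and let DSPy manage the prompt. Assumes a DSPy version exposing dspy.OpenAI
# (newer releases use dspy.LM) and an OPENAI_API_KEY environment variable.
import dspy

dspy.settings.configure(lm=dspy.OpenAI(model="gpt-3.5-turbo"))

class GenerateAnswer(dspy.Signature):
    """Answer questions with short, factual answers."""
    question = dspy.InputField()
    answer = dspy.OutputField(desc="a short answer")

# ChainOfThought inserts an intermediate reasoning step; DSPy optimizers can later
# tune the demonstrations and instructions rather than you editing prompt strings.
qa = dspy.ChainOfThought(GenerateAnswer)
print(qa(question="Which company created GPT-4?").answer)
```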
GitHub Awesome-List for LLMs for Video Understanding
read time: 15 minutes
A comprehensive awesome-list of well-regarded projects applying Large Language Models (LLMs) to video understanding, including models, tasks, datasets, and benchmarks.
Introducing StarCoder2: A Powerful AI for Code Generation
read time: 10 minutes
StarCoder2 is a 15B parameter model trained on 600+ programming languages, capable of generating code snippets. It was trained using the NVIDIA NeMo™ Framework and the NVIDIA Eos Supercomputer. However, the generated code may contain bugs or inefficiencies. Learn more about StarCoder2 here.
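A minimal sketch of trying StarCoder2 locally via Hugging Face transformers, assuming the bigcode/starcoder2-15b checkpoint and a GPU with enough memory (the smaller 3B and 7B checkpoints swap in the same way):

```python
# Minimal sketch: code completion with StarCoder2 via Hugging Face transformers.
# Assumes the bigcode/starcoder2-15b checkpoint, torch + transformers installed,
# and a GPU with enough memory; the 3B/7B checkpoints swap in identically.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder2-15b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# As noted above, review the output: generated code may contain bugs or inefficiencies.
```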
ShieldLM: A New Safety Detector for Large Language Models
read time: 8 minutes
ShieldLM, a bilingual safety detector for Large Language Models (LLMs) introduced in a new paper, aligns with human safety standards, supports customizable detection rules, and provides explanations for its decisions. It outperforms strong baselines across various test sets and can be applied without crafting prompts, making it a versatile tool for a range of application scenarios.
📰 NEWS & EDITORIALS
Google Confronts Bias in Gemini AI: Embarrassing Mistakes Spark Urgent Calls for Regulation
read time: 5 minutes
Google was forced to disable Gemini's image generator after the AI displayed glaring biases, drawing Nazi soldiers and Popes with diverse ethnicities and genders. The incident highlights the unreliability of generative AI – even from tech giants like Google – due to biased training data and inherent limitations in the technology. Experts call for greater scrutiny and potential regulation of generative AI tools. Learn more about Google's struggle with AI bias in this article.
Meta Unveils LLaMA 3: A Powerful AI for Handling Controversial Queries
read time: 2 minutes
Meta AI's newest Large Language Model, LLaMA 3, promises sophisticated responses to complex questions, even controversial ones. The company aims to distinguish itself from rivals like Google Gemini by addressing context and nuance, avoiding inaccurate or misleading responses. The release of LLaMA 3 is expected in July.
Apple's Ambitious Leap into Generative AI
read time: 3 minutes
Apple CEO Tim Cook has hinted at significant advancements in generative AI from the company, likely to be unveiled at WWDC in June with iOS 18. Despite the competition from OpenAI's ChatGPT and Google's Gemini, Apple's system-level access could supercharge its AI capabilities. Read more about Apple's AI plans here.
Thanks for reading, and we will see you next time.
Follow me on Twitter and DM me links you would like included in future newsletters.