Category/Tag: Tutorials
Haystack Streaming Text Generation
- By Bruce Nielson
- ML & AI Specialist
In this post, we're going to give our sample Retrieval Augmented Generation (RAG) pipeline a bit of a makeover. Specifically, we'll tweak it to "stream" results from multiple nodes, letting us display output the moment it arrives instead of waiting for everything to pile up at the end.
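For a quick taste of the idea, here is a minimal streaming sketch using a Haystack 2.x generator. The generator class, model name, and callback shown here are illustrative assumptions, not necessarily the exact setup used in the full post.

```python
# Minimal sketch: print tokens as they are generated instead of waiting
# for the full response (assumed setup; the full post may differ).
from haystack.components.generators import HuggingFaceLocalGenerator
from haystack.dataclasses import StreamingChunk


def print_chunk(chunk: StreamingChunk) -> None:
    # Called once per generated chunk; show it immediately.
    print(chunk.content, end="", flush=True)


generator = HuggingFaceLocalGenerator(
    model="google/flan-t5-large",     # illustrative model choice
    streaming_callback=print_chunk,   # stream output as it is produced
)
generator.warm_up()
generator.run(prompt="Explain Retrieval Augmented Generation in one sentence.")
```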
Avoiding Text Truncations in RAG
- By Bruce Nielson
- ML & AI Specialist
In our series of articles on AI, we have been exploring cost-effective ways of building AI and ML models ourselves so that anyone can get into AI right now. However, a downside of the sentence embedding model we have been using is its tendency to truncate text, which reduces the overall performance of the RAG system. In today's article, we will explore how to overcome that problem.
Using Hugging Face API Generators for RAG
- By Bruce Nielson
- ML & AI Specialist
Want to take advantage of AI and ML, but don't want to break the bank while doing so? Mindfire TECH is eager to help you and your business enter the future. Part of that includes our ongoing free series of tutorials and demos showcasing how you can get started with AI and ML right now. In this one, join Bruce Nielson as he explores how to use Hugging Face API Generators for RAG.
Google AI Integration with Haystack
- By Bruce Nielson
- ML & AI Specialist
AI and LLMs like ChatGPT can be quite expensive; even when you are paying only a small amount per token, the costs can add up very quickly. Mindfire TECH is committed to providing cost-efficient Artificial Intelligence solutions to our customers, and in today's blog we'll explore one way to get into AI without breaking the bank. The solution: Google Gemini.
Retrieval Augmented Generation with Haystack and pgvector - Part 2
- By Bruce Nielson
- ML & AI Specialist
Picking up where last week's article on Retrieval Augmented Generation (RAG for short) left off, Bruce Nielson expands his series of AI and ML articles with Part 2 on RAG. If you're interested in learning the ins and outs of how AI works, look no further than Bruce's ongoing series of tutorials and demos.
Retrieval Augmented Generation with Haystack and pgvector
- By Bruce Nielson
- ML & AI Specialist
Interested in learning more about how ML actually works? Look no further than our continuing series of tutorials and demos on ML and AI, including this blog post by Bruce Nielson, where he continues breaking down how Retrieval Augmented Generation (RAG) works with Haystack and pgvector.
Getting Started with Stable Diffusion: A Beginner's Guide
- By Bruce Nielson
- ML & AI Specialist
Most people who know about AI have heard of Stable Diffusion, one of the leading AI models in text-to-image generation. In today's article our in-house expert on ML and AI, Bruce Nielson, will walk you through the steps of setting up a working version of Stable Diffusion right on your own laptop.
Google Gemma Demo: Setting up a LLM with Text Streaming
- By Bruce Nielson
- ML & AI Specialist
In a previous post, we introduced Google's Gemma, but didn't dive very deep into it. In this new post by Bruce Nielson, we will showcase exactly how to set up an LLM with Gemma that can run on your laptop.
Writing a Custom Haystack Pipeline Component
- By Bruce Nielson
- ML & AI Specialist
In a previous post, we used Haystack with pgvector to create a PostgreSQL document store. In this post, we're going to go over how to create a custom Haystack pipeline component that wraps our own code. This will allow us to implement a Haystack pipeline for our EPUB document converter.
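As a preview, here is a minimal sketch of what a custom Haystack 2.x component looks like. The EPUBToDocuments name and its stubbed-out conversion logic are hypothetical placeholders, not the exact component built in the post.

```python
# Minimal sketch of a custom Haystack 2.x pipeline component.
# EPUBToDocuments and its stubbed conversion are illustrative only.
from pathlib import Path
from typing import List

from haystack import Document, component


@component
class EPUBToDocuments:
    """Turn EPUB files into Haystack Documents (stubbed for illustration)."""

    @component.output_types(documents=List[Document])
    def run(self, file_paths: List[str]):
        docs = []
        for path in file_paths:
            # Real EPUB text extraction would go here; we just record the
            # file name so the sketch stays self-contained and runnable.
            docs.append(Document(content=f"Text extracted from {Path(path).name}"))
        return {"documents": docs}
```

A component like this can then be registered with Pipeline.add_component and wired into the rest of the RAG flow with Pipeline.connect.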
Environment Setup for RAG using Python, Haystack, PostgreSQL, pgvector, and Hugging Face
- By Bruce Nielson
- ML & AI Specialist
This post is your one-stop shop for setting up a local environment to follow along with my blog posts on Retrieval Augmented Generation (RAG) using Haystack, PostgreSQL, and pgvector, with an open-source Large Language Model from Hugging Face.