Last time in our series of AI tutorials, we covered how to install Neo4j (a graph database). For the Mindfire stack, we're aiming to stick with open-source software, and Neo4j Server Community Edition fits the bill. However, a free Enterprise edition (with some limitations) is also available through Neo4j Desktop, and that's what we'll be looking at today.
In previous posts, we learned how to install PostgreSQL and then how to install pgvector (a vector-search extension for PostgreSQL). The Book Search Archive (our toy app for showing off Mindfire's growing low-cost, open-source AI stack) currently uses a PostgreSQL database, with vectors handled by pgvector, to run our semantic searches against an HNSW index built into pgvector. In this post, we'll look at how to use Neo4j, a graph database with vector support, to do RAG.
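To ground that description, here is a minimal sketch of what an HNSW index and a semantic-search query look like with pgvector. This is illustrative only, not the Book Search Archive's actual code: the table and column names ("books", "embedding") and the connection string are hypothetical.

```python
# Hypothetical pgvector sketch; table/column names are made up.
import psycopg2

conn = psycopg2.connect("dbname=booksearch")  # placeholder connection string
cur = conn.cursor()

# Build an HNSW index over the embedding column so nearest-neighbor
# searches don't need a full table scan. Requires the pgvector extension
# (CREATE EXTENSION vector) and a vector-typed column.
cur.execute(
    "CREATE INDEX IF NOT EXISTS books_embedding_hnsw "
    "ON books USING hnsw (embedding vector_cosine_ops)"
)

# <=> is pgvector's cosine-distance operator; smaller means more similar.
cur.execute(
    "SELECT id, title FROM books ORDER BY embedding <=> %s::vector LIMIT 5",
    ("[0.1, 0.2, 0.3]",),  # a real query embedding would go here
)
print(cur.fetchall())

conn.commit()
cur.close()
conn.close()
```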
Git can be a helpful way to compare text, especially different versions of output, but sometimes part of that history needs to go away, which is where git-filter-repo comes in. However, it isn't the easiest tool to figure out. Suppose you need to remove a file from a GitHub repo: how would you do that? If you just delete the file and check in the change, the older commits still contain that file, which means it is still part of the overall repo. What to do? In today's blog, we'll explore how to remove a file from your GitHub repo's history.
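As a preview, the core of the approach looks something like the sketch below, which shells out to git-filter-repo's documented command line; the file name secrets.env is a hypothetical stand-in for whatever you need to purge.

```python
# Hypothetical sketch: remove one file from all of git history by invoking
# the git-filter-repo CLI (equivalent to running the command in a shell).
import subprocess

# --invert-paths keeps everything EXCEPT the listed path, so the named file
# is stripped from every commit. git-filter-repo expects a fresh clone
# (or --force), and you must force-push the rewritten history afterward.
subprocess.run(
    ["git", "filter-repo", "--path", "secrets.env", "--invert-paths"],
    check=True,
)
```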
In a previous post, we talked about how AI developers can combine lexical and semantic search results to produce a better response. Today, in this article by Bruce Nielson, we'll look a bit deeper into how you can make your AI search tools far more intelligent.
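The post itself covers the details, but as a taste, here is one common way to merge a lexical result list with a semantic one: reciprocal rank fusion (RRF). This is a generic sketch, not necessarily the scheme the article uses, and the document IDs are placeholders.

```python
# Reciprocal rank fusion: a simple, widely used way to merge ranked lists.
from collections import defaultdict

def rrf_merge(ranked_lists, k=60):
    """Fuse several ranked lists of doc IDs into one list, best first."""
    scores = defaultdict(float)
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking):
            # Higher-ranked (earlier) documents contribute more score.
            scores[doc_id] += 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["doc3", "doc1", "doc7"]   # e.g. keyword/BM25 results
semantic = ["doc1", "doc4", "doc3"]  # e.g. vector-similarity results
print(rrf_merge([lexical, semantic]))  # doc1 and doc3 rise to the top
```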
In our last post, we looked at how to stream a text-to-speech model using the Hugging Face API, and we had mixed results at best. So, in this post, we'll cover how to do text-to-speech (TTS) with a local model, using Suno's Bark through Hugging Face.
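For a sense of what running TTS locally means in practice, here is a minimal sketch based on the usage shown on the Bark model card in the Hugging Face Transformers docs; the sample sentence and output file name are arbitrary.

```python
# Minimal local TTS sketch with Suno Bark via Hugging Face Transformers,
# following the suno/bark-small model card; runs entirely on your machine.
from scipy.io import wavfile
from transformers import AutoProcessor, BarkModel

processor = AutoProcessor.from_pretrained("suno/bark-small")
model = BarkModel.from_pretrained("suno/bark-small")

inputs = processor("Hello from the blog!", voice_preset="v2/en_speaker_6")
audio = model.generate(**inputs).cpu().numpy().squeeze()

# Write the generated waveform out as a WAV file.
wavfile.write("bark_demo.wav", rate=model.generation_config.sample_rate, data=audio)
```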
Looking to make your AI a bit more personable or easier to communicate with? In today's article by Bruce Nielson, we'll test out some ways of adding text-to-speech functionality to an LLM using the Hugging Face framework.
In our last AI tutorial, we talked about Haystack's built-in component for loading PDF documents, PyPDFToDocument. However, we feel the results are sometimes a bit underwhelming. In today's article, we'll explore some potentially better methods.
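For reference, this is roughly what the baseline PyPDFToDocument usage looks like in Haystack 2.x; the file path is a placeholder, and a real pipeline would add cleaning and splitting steps.

```python
# Baseline PDF loading with Haystack's built-in converter (Haystack 2.x).
from haystack.components.converters import PyPDFToDocument

converter = PyPDFToDocument()
result = converter.run(sources=["sample.pdf"])  # placeholder path

# Each Document carries the extracted text; quality depends heavily on how
# the PDF was produced, which is where the underwhelming results come from.
for doc in result["documents"]:
    print(doc.content[:200])
```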
In our ongoing series of AI tutorials, we have been focusing on using EPUB documents instead of PDFs. In today's article, we dive into the reasons why that is, and into the differences between PDF and HTML documents when it comes to AI frameworks. We also look at how you can use PDF documents with AI.
In a previous post, our AI expert, Bruce Nielson, discussed how to run a hybrid search wherein we merge a lexical search and a semantic search to create a single, higher-quality result. Today, we will dive into how to conduct a lexical search on its own, so as to better showcase the differences.
In a previous blog post, we introduced our "Book Search Archive," which we have been using to experiment with using AI to search for and extrapolate information. In today's blog post with Bruce Nielson, we'll be discussing "hybrid searches" to improve our AI model and showcase the power of an AI-driven search.