<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Qdrant Blog on Qdrant - Vector Database</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/</link><description>Recent content in Qdrant Blog on Qdrant - Vector Database</description><generator>Hugo</generator><language>en-us</language><managingEditor>info@qdrant.tech (Andrey Vasnetsov)</managingEditor><webMaster>info@qdrant.tech (Andrey Vasnetsov)</webMaster><lastBuildDate>Fri, 20 Feb 2026 00:00:00 -0800</lastBuildDate><atom:link href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/index.xml" rel="self" type="application/rss+xml"/><item><title>Vultr and Qdrant Hybrid Cloud Support Next-Gen AI Projects</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-vultr/</link><pubDate>Wed, 10 Apr 2024 00:08:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-vultr/</guid><description>&lt;p>We’re excited to share that Qdrant and &lt;a href="https://www.vultr.com/" target="_blank" rel="noopener nofollow">Vultr&lt;/a> are partnering to provide seamless scalability and performance for vector search workloads. With Vultr&amp;rsquo;s global footprint and customizable platform, deploying vector search workloads becomes incredibly flexible. Qdrant&amp;rsquo;s new &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a> offering and its Kubernetes-native design, coupled with Vultr&amp;rsquo;s straightforward virtual machine provisioning, allow for simple setup when prototyping and building next-gen AI apps.&lt;/p>
&lt;h4 id="adapting-to-diverse-ai-development-needs-with-customization-and-deployment-flexibility">Adapting to Diverse AI Development Needs with Customization and Deployment Flexibility&lt;/h4>
&lt;p>In the fast-paced world of AI and ML, businesses are eagerly integrating AI and generative AI to enhance their products with new features like AI assistants, develop innovative new solutions, and streamline internal workflows with AI-driven processes. Given the diverse needs of these applications, it&amp;rsquo;s clear that a one-size-fits-all approach doesn&amp;rsquo;t apply to AI development. This variability in requirements underscores the need for adaptable and customizable development environments.&lt;/p></description></item><item><title>STACKIT and Qdrant Hybrid Cloud for Best Data Privacy</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-stackit/</link><pubDate>Wed, 10 Apr 2024 00:07:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-stackit/</guid><description>&lt;p>Qdrant and &lt;a href="https://www.stackit.de/en/" target="_blank" rel="noopener nofollow">STACKIT&lt;/a> are thrilled to announce that developers are now able to deploy a fully managed vector database to their STACKIT environment with the introduction of &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a>. This is a great step forward for the German AI ecosystem as it enables developers and businesses to build cutting-edge AI applications that run on German data centers with full control over their data.&lt;/p>
&lt;p>Vector databases are an essential component of the modern AI stack. They enable rapid and accurate retrieval of high-dimensional data, crucial for powering search, recommendation systems, and augmenting machine learning models. In the rising field of GenAI, vector databases power retrieval-augmented generation (RAG) scenarios as they are able to enhance the output of large language models (LLMs) by injecting relevant contextual information. However, this contextual information is often rooted in confidential internal or customer-related information, which is why enterprises are in pursuit of solutions that allow them to make this data available for their AI applications without compromising data privacy, losing data control, or letting data exit the company&amp;rsquo;s secure environment.&lt;/p></description></item><item><title>Qdrant Hybrid Cloud and Scaleway Empower GenAI</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-scaleway/</link><pubDate>Wed, 10 Apr 2024 00:06:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-scaleway/</guid><description>&lt;p>In a move to empower the next wave of AI innovation, Qdrant and &lt;a href="https://www.scaleway.com/en/" target="_blank" rel="noopener nofollow">Scaleway&lt;/a> collaborate to introduce &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a>, a fully managed vector database that can be deployed on existing Scaleway environments. This collaboration is set to democratize access to advanced AI capabilities, enabling developers to easily deploy and scale vector search technologies within Scaleway&amp;rsquo;s robust and developer-friendly cloud infrastructure. 
By focusing on the unique needs of startups and the developer community, Qdrant and Scaleway are providing access to intuitive and easy-to-use tools, making cutting-edge AI more accessible than ever before.&lt;/p></description></item><item><title>Red Hat OpenShift and Qdrant Hybrid Cloud Offer Seamless and Scalable AI</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-red-hat-openshift/</link><pubDate>Thu, 11 Apr 2024 00:04:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-red-hat-openshift/</guid><description>&lt;p>We’re excited about our collaboration with Red Hat to bring the Qdrant vector database to &lt;a href="https://www.redhat.com/en/technologies/cloud-computing/openshift" target="_blank" rel="noopener nofollow">Red Hat OpenShift&lt;/a> customers! With the release of &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a>, developers can now deploy and run the Qdrant vector database directly in their Red Hat OpenShift environment. This collaboration enables developers to scale more seamlessly, operate more consistently across hybrid cloud environments, and maintain complete control over their vector data. 
This is a big step forward in simplifying AI infrastructure and empowering data-driven projects, like retrieval-augmented generation (RAG) use cases, advanced search scenarios, or recommendation systems.&lt;/p></description></item><item><title>Qdrant and OVHcloud Bring Vector Search to All Enterprises</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-ovhcloud/</link><pubDate>Wed, 10 Apr 2024 00:05:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-ovhcloud/</guid><description>&lt;p>With the official release of &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a>, businesses running their data infrastructure on &lt;a href="https://ovhcloud.com/" target="_blank" rel="noopener nofollow">OVHcloud&lt;/a> are now able to deploy a fully managed vector database in their existing OVHcloud environment. We are excited about this partnership, which has been established through the &lt;a href="https://opentrustedcloud.ovhcloud.com/en/" target="_blank" rel="noopener nofollow">OVHcloud Open Trusted Cloud&lt;/a> program, as it is based on our shared understanding of the importance of trust, control, and data privacy in the context of the emerging landscape of enterprise-grade AI applications. 
As part of this collaboration, we are also providing a detailed use case tutorial on building a recommendation system that demonstrates the benefits of running Qdrant Hybrid Cloud on OVHcloud.&lt;/p></description></item><item><title>New RAG Horizons with Qdrant Hybrid Cloud and LlamaIndex</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-llamaindex/</link><pubDate>Wed, 10 Apr 2024 00:04:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-llamaindex/</guid><description>&lt;p>We&amp;rsquo;re happy to announce the collaboration between &lt;a href="https://www.llamaindex.ai/" target="_blank" rel="noopener nofollow">LlamaIndex&lt;/a> and &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant’s new Hybrid Cloud launch&lt;/a>, aimed at empowering engineers and scientists worldwide to swiftly and securely develop and scale their GenAI applications. By leveraging LlamaIndex&amp;rsquo;s robust framework, users can maximize the potential of vector search and create stable and effective AI products. 
Qdrant Hybrid Cloud offers the same Qdrant functionality on a Kubernetes-based architecture, which further expands the ability of LlamaIndex to support any user on any environment.&lt;/p></description></item><item><title>Developing Advanced RAG Systems with Qdrant Hybrid Cloud and LangChain</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-langchain/</link><pubDate>Sun, 14 Apr 2024 00:04:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-langchain/</guid><description>&lt;p>&lt;a href="https://www.langchain.com/" target="_blank" rel="noopener nofollow">LangChain&lt;/a> and Qdrant are collaborating on the launch of &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a>, which is designed to empower engineers and scientists globally to easily and securely develop and scale their GenAI applications. Harnessing LangChain’s robust framework, users can unlock the full potential of vector search, enabling the creation of stable and effective AI products. 
Qdrant Hybrid Cloud extends the same powerful functionality of Qdrant onto a Kubernetes-based architecture, enhancing LangChain’s capability to cater to users across any environment.&lt;/p></description></item><item><title>Cutting-Edge GenAI with Jina AI and Qdrant Hybrid Cloud</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-jinaai/</link><pubDate>Wed, 10 Apr 2024 00:03:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-jinaai/</guid><description>&lt;p>We&amp;rsquo;re thrilled to announce the collaboration between Qdrant and &lt;a href="https://jina.ai/" target="_blank" rel="noopener nofollow">Jina AI&lt;/a> for the launch of &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a>, empowering users worldwide to rapidly and securely develop and scale their AI applications. By leveraging Jina AI&amp;rsquo;s top-tier large language models (LLMs), engineers and scientists can optimize their vector search efforts. Qdrant&amp;rsquo;s latest Hybrid Cloud solution, designed natively with Kubernetes, seamlessly integrates with Jina AI&amp;rsquo;s robust embedding models and APIs. 
This synergy streamlines both prototyping and deployment processes for AI solutions.&lt;/p></description></item><item><title>Qdrant Hybrid Cloud and Haystack for Enterprise RAG</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-haystack/</link><pubDate>Wed, 10 Apr 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-haystack/</guid><description>&lt;p>We’re excited to share that Qdrant and &lt;a href="https://haystack.deepset.ai/" target="_blank" rel="noopener nofollow">Haystack&lt;/a> are continuing to expand their seamless integration to the new &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a> offering, allowing developers to deploy a managed &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/articles/what-is-a-vector-database/">vector database&lt;/a> in their own environment of choice. Earlier this year, both Qdrant and Haystack, started to address their user’s growing need for production-ready retrieval-augmented-generation (RAG) deployments. The ability to build and deploy AI apps anywhere now allows for complete data sovereignty and control. 
This gives large enterprise customers the peace of mind they need before they expand AI functionalities throughout their operations.&lt;/p></description></item><item><title>Qdrant Hybrid Cloud and DigitalOcean for Scalable and Secure AI Solutions</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-digitalocean/</link><pubDate>Thu, 11 Apr 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-digitalocean/</guid><description>&lt;p>Developers are constantly seeking new ways to enhance their AI applications with new customer experiences. At the core of this are vector databases, as they enable the efficient handling of complex, unstructured data, making it possible to power applications with semantic search, personalized recommendation systems, and intelligent Q&amp;amp;A platforms. However, when deploying such new AI applications, especially those handling sensitive or personal user data, privacy becomes important.&lt;/p>
&lt;p>&lt;a href="https://www.digitalocean.com/" target="_blank" rel="noopener nofollow">DigitalOcean&lt;/a> and Qdrant are actively addressing this with an integration that lets developers deploy a managed vector database in their existing DigitalOcean environments. With the recent launch of &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a>, developers can seamlessly deploy Qdrant on DigitalOcean Kubernetes (DOKS) clusters, making it easier for developers to handle vector databases without getting bogged down in the complexity of managing the underlying infrastructure.&lt;/p></description></item><item><title>Enhance AI Data Sovereignty with Aleph Alpha and Qdrant Hybrid Cloud</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-aleph-alpha/</link><pubDate>Thu, 11 Apr 2024 00:01:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-aleph-alpha/</guid><description>&lt;p>&lt;a href="https://aleph-alpha.com/" target="_blank" rel="noopener nofollow">Aleph Alpha&lt;/a> and Qdrant are on a joint mission to empower the world’s best companies in their AI journey. The launch of &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a> furthers this effort by ensuring complete data sovereignty and hosting security. This latest collaboration is all about giving enterprise customers complete transparency and sovereignty to make use of AI in their own environment. 
By using a hybrid cloud vector database, those looking to leverage vector search for their AI applications can now ensure their proprietary and customer data is completely secure.&lt;/p></description></item><item><title>Elevate Your Data With Airbyte and Qdrant Hybrid Cloud</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-airbyte/</link><pubDate>Wed, 10 Apr 2024 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-airbyte/</guid><description>&lt;p>In their mission to support large-scale AI innovation, &lt;a href="https://airbyte.com/" target="_blank" rel="noopener nofollow">Airbyte&lt;/a> and Qdrant are collaborating on the launch of Qdrant’s new offering, &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a>. This collaboration allows users to leverage the synergistic capabilities of both Airbyte and Qdrant within a private infrastructure. Qdrant’s new offering represents the first managed &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/articles/what-is-a-vector-database/">vector database&lt;/a> that can be deployed in any environment. 
Businesses optimizing their data infrastructure with Airbyte are now able to host a vector database either on-premises or on a public cloud of their choice, while still reaping the benefits of a managed database product.&lt;/p></description></item><item><title>Qdrant 1.17 - Relevance Feedback &amp; Search Latency Improvements</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.17.x/</link><pubDate>Fri, 20 Feb 2026 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.17.x/</guid><description>&lt;p>&lt;a href="https://github.com/qdrant/qdrant/releases/tag/v1.17.0" target="_blank" rel="noopener nofollow">&lt;strong>Qdrant 1.17.0 is out!&lt;/strong>&lt;/a> Let’s look at the main features for this version:&lt;/p>
&lt;p>&lt;strong>Relevance Feedback Query:&lt;/strong> Improve the quality of search results by incorporating information about their relevance.&lt;/p>
&lt;p>&lt;strong>Search Latency Improvements:&lt;/strong> Manage search latency with new tools, such as an update queue and delayed fan-outs, as well as many internal search performance improvements.&lt;/p>
&lt;p>&lt;strong>Greater Operational Observability:&lt;/strong> Better insights into operational metrics and faster troubleshooting with a new cluster-wide telemetry API and segment optimization monitoring.&lt;/p></description></item><item><title>How Bazaarvoice scaled AI-powered product insights with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-bazaarvoice/</link><pubDate>Tue, 10 Feb 2026 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-bazaarvoice/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-bazaarvoice/bazaarvoice-bento.png" alt="Bazaarvoice overview">&lt;/p>
&lt;h2 id="turning-billions-of-reviews-into-real-time-actionable-intelligence">Turning billions of reviews into real-time, actionable intelligence&lt;/h2>
&lt;p>Bazaarvoice powers ratings and reviews across the global ecommerce ecosystem, connecting brands, retailers, and consumers through authentic product feedback. From brand-owned storefronts to major retailers, Bazaarvoice sources, verifies, and amplifies reviews at a scale few companies ever reach.&lt;/p>
&lt;p>As large language models (LLMs) became production-ready, Bazaarvoice saw an opportunity to enhance the experiences of their clients&amp;rsquo; shoppers. The company wanted to help shoppers ask questions directly on product detail pages using natural language and help brands extract meaningful insights from vast volumes of unstructured customer feedback.&lt;/p></description></item><item><title>Sketch &amp; Search: Google Deepmind x Qdrant x Freepik Hackathon Winners</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/sketch-n-search-winners/</link><pubDate>Tue, 03 Feb 2026 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/sketch-n-search-winners/</guid><description>&lt;p>Builders from around the world came together for Sketch &amp;amp; Search, a global hackathon powered by Google DeepMind, Freepik, and Qdrant, to explore the future of AI-driven creative pipelines.&lt;/p>
&lt;p>Teams were challenged to go beyond single-prompt generation and build end-to-end systems combining generative models, visual creation, and vector search. Submissions showcased consistent characters and style memory, image-as-prompt and image-to-video workflows, intelligent asset discovery, recommendations, and built-in brand-safe guardrails.&lt;/p>
&lt;p>The hackathon kicked off in San Francisco on November 22, 2025, followed by a two-week virtual build window and a live demo day where winners were announced. Projects were judged on creative quality, effective search and similarity, UX tradeoffs, guardrails, and real-world applicability.&lt;/p></description></item><item><title>How Anima Health scaled clinical document intelligence with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-anima-health/</link><pubDate>Wed, 28 Jan 2026 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-anima-health/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-anima-health/anima-bento.png" alt="Anima Health scaled privacy-first clinical intelligence with Qdrant">&lt;/p>
&lt;p>Primary care systems across the UK are under intense strain. General practitioners (GPs) must juggle patient demand, understaffing, and administrative burden while still delivering care. &lt;a href="https://animahealth.com/" target="_blank">Anima Health&lt;/a> set out to address this challenge by building a clinical operating system designed to make primary care more efficient, more informed, and more humane for both clinicians and patients.&lt;/p>
&lt;p>At the heart of Anima’s platform is the ability to process large volumes of unstructured clinical data, including documents, test results, referral letters, and notes, while maintaining strict privacy guarantees. To achieve this at scale, Anima relies on Qdrant as a core infrastructure component for vector search, similarity analysis, and agentic AI workflows.&lt;/p></description></item><item><title>Qdrant Academy Expands with Official Certification</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-certification-launch/</link><pubDate>Wed, 28 Jan 2026 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-certification-launch/</guid><description>&lt;p>Since we first announced &lt;strong>&lt;a href="https://qdrant.tech/course/" target="_blank" rel="noopener nofollow">Qdrant Academy&lt;/a>&lt;/strong>, our mission has been to provide developers with more than just documentation. We wanted to build a structured path to mastering vector search. As the AI search landscape matures, the distinction between a simple storage layer and a high-performance vector search engine has become the defining factor in production-grade RAG and recommendation systems.&lt;/p>
&lt;p>Today, we are thrilled to take the next step in that mission. It’s time to move from learning to proving your expertise with the launch of our first official certification.&lt;/p></description></item><item><title>How Kakao Built an AI-Powered Internal Service Desk with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-kakao/</link><pubDate>Tue, 27 Jan 2026 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-kakao/</guid><description>&lt;p>&lt;a href="https://www.kakaocorp.com/" target="_blank">Kakao&lt;/a> is one of South Korea&amp;rsquo;s leading technology companies, best known for KakaoTalk, the country&amp;rsquo;s dominant messaging platform with over 48 million monthly active users. Beyond messaging, Kakao operates a broad ecosystem of services including maps, mobility, fintech, and enterprise solutions.&lt;/p>
&lt;h2 id="helping-employees-find-answers-faster-without-sacrificing-precision-or-control">Helping employees find answers faster without sacrificing precision or control&lt;/h2>
&lt;p>Kakao’s Connectivity Platform team set out to solve a familiar internal problem: employees across the organization needed a faster, more reliable way to get answers about internal systems, APIs, and operational procedures. The result was &lt;strong>Service Desk Agent&lt;/strong>, an AI-powered internal service desk designed to answer questions in natural language using Kakao’s internal documentation and historical inquiry data.&lt;/p></description></item><item><title>Building real-time multimodal similarity search in Flipkart Trust &amp; Safety with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-flipkart/</link><pubDate>Fri, 09 Jan 2026 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-flipkart/</guid><description>&lt;h3 id="tackling-fraud-and-abuse-with-scalable-similarity-search">Tackling fraud and abuse with scalable similarity search&lt;/h3>
&lt;p>At Flipkart, the Trust &amp;amp; Safety team is focused on detecting and preventing platform abuse and fraud. A critical part of this work involves running large-scale similarity searches across customer and seller-submitted data, particularly images. This allows the team to identify patterns associated with fraudulent activity, such as repeat returns or duplicate seller claims, before they cause downstream harm.&lt;/p>
&lt;p>&lt;em>“Platform integrity is a constant challenge. To stay ahead of fraudulent actors, we needed a system that could compare multimodal data in real time, not just in long-running batch jobs.”&lt;/em>&lt;/p></description></item><item><title>Qdrant 2025 Recap: Powering the Agentic Era</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/2025-recap/</link><pubDate>Wed, 17 Dec 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/2025-recap/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/2025-recap/2025-infographic.png" alt="Infographic">&lt;/p>
&lt;p>2025 was a defining year for Qdrant. Not because of a single feature or launch, but because of a clear shift in what the platform enables. As AI systems moved from static assistants to autonomous, multi-step agents, the demands placed on retrieval changed fundamentally. Speed alone was no longer enough. Production systems now require precise relevance control, predictable performance at scale, and the flexibility to run wherever data and users live.&lt;/p></description></item><item><title>New DeepLearning.AI Course on Multi-Vector Image Retrieval with ColPali and MUVERA</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-deeplearning-ai-multi-vector-image-retrieval/</link><pubDate>Thu, 11 Dec 2025 17:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-deeplearning-ai-multi-vector-image-retrieval/</guid><description>&lt;p>We&amp;rsquo;re thrilled to announce our latest collaboration with DeepLearning.AI: &lt;a href="https://www.deeplearning.ai/short-courses/multi-vector-image-retrieval/" target="_blank" rel="noopener nofollow">Multi-Vector Image Retrieval&lt;/a>. Building on the success of our previous course on retrieval optimization, this intermediate-level course takes you deeper into advanced search techniques that are transforming how AI systems understand and retrieve visual content.&lt;/p>
&lt;p>Led once again by Qdrant&amp;rsquo;s Kacper Łukawski, Senior Developer Advocate, this free course is designed for AI builders working with multi-modal data who want to implement cutting-edge image retrieval in their applications.&lt;/p></description></item><item><title>How Cosmos delivered editorial-grade visual search with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-cosmos/</link><pubDate>Thu, 20 Nov 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-cosmos/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-cosmos/cosmos-bento-box-dark.jpg" alt="How Cosmos powered text, color, and hybrid search with Qdrant">&lt;/p>
&lt;p>&lt;a href="https://www.cosmos.so/" target="_blank">Cosmos&lt;/a> is redefining how people find inspiration online. It’s a visual search app built for creative professionals and everyday users who want a clean, meditative, ad-free place to collect and curate ideas. In contrast to feeds dominated by doomscrolling, ads, and generative “AI slop,” Cosmos focuses on high-quality, human-made content. AI-powered search and captions connect each image to its creator, making visual discovery richer, more accurate, and easier to navigate.&lt;/p></description></item><item><title>Qdrant 1.16 - Tiered Multitenancy &amp; Disk-Efficient Vector Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.16.x/</link><pubDate>Wed, 19 Nov 2025 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.16.x/</guid><description>&lt;p>&lt;a href="https://github.com/qdrant/qdrant/releases/tag/v1.16.0" target="_blank" rel="noopener nofollow">&lt;strong>Qdrant 1.16.0 is out!&lt;/strong>&lt;/a> Let’s look at the main features for this version:&lt;/p>
&lt;p>&lt;strong>Tiered Multitenancy:&lt;/strong> An improved approach to multitenancy that enables you to combine small and large tenants in a single collection, with the ability to promote growing tenants to dedicated shards.&lt;/p>
&lt;p>&lt;strong>ACORN&lt;/strong>: A new search algorithm that improves the quality of filtered vector search in cases of multiple filters with weak selectivity.&lt;/p>
&lt;p>&lt;strong>Inline Storage&lt;/strong>: A new HNSW index storage mode that stores vector data directly inside HNSW nodes, enabling efficient disk-based vector search.&lt;/p></description></item><item><title>How Dragonfruit AI scaled real-time computer vision with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-dragonfruit/</link><pubDate>Thu, 13 Nov 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-dragonfruit/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-dragonfruit/dragonfruit-bento-box-dark.png" alt="Dragonfruit Overview">&lt;/p>
&lt;h2 id="dragonfruit-ai-scales-real-time-computer-vision-with-qdrant">Dragonfruit AI scales real-time computer vision with Qdrant&lt;/h2>
&lt;h3 id="building-enterprise-ready-computer-vision">Building enterprise-ready computer vision&lt;/h3>
&lt;p>&lt;a href="https://www.dragonfruit.ai/" target="_blank">Dragonfruit AI&lt;/a> builds enterprise-ready computer vision solutions, turning ordinary IP camera feeds into actionable insights for security, safety, operations, and compliance. Their platform ships a suite of AI “agents,” including retail loss prevention and warehouse safety, that run with a patented “Split AI” approach: real-time inference on-prem for speed and bandwidth efficiency, paired with cloud services for aggregation and search. Dragonfruit needed to keep total cost of ownership low, meet strict latency targets, and operate reliably across hundreds of sites with thousands of cameras; all without asking customers to rip and replace existing infrastructure.&lt;/p></description></item><item><title>How Xaver scaled personalized financial advice with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-xaver/</link><pubDate>Thu, 13 Nov 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-xaver/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-xaver/xaver-bento-box-dark.jpg" alt="Xaver Overview">&lt;/p>
&lt;h2 id="how-xaver-built-its-ai-knowledge-engine-with-qdrant">How Xaver Built its AI Knowledge Engine with Qdrant&lt;/h2>
&lt;p>&lt;a href="https://www.xaver.com/" target="_blank">Xaver&lt;/a> is tackling a core challenge in the financial industry: scaling personalized financial and retirement advice. As demographic shifts increase demand for private pensions, traditional, manual consultation models are proving too slow and costly to support everyone who needs help.&lt;/p>
&lt;p>To solve this, Xaver provides banks, insurers and distributors with a vertically specialized and compliant agentic sales platform. This technology acts as both an AI sales assistant for human advisors and as an autonomous agent to deliver compliant, personalized financial guidance to consumers via phone, video avatars, messengers and web journeys.&lt;/p></description></item><item><title>Qdrant Academy Launches with Qdrant Essentials Course</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-academy-launch/</link><pubDate>Thu, 23 Oct 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-academy-launch/</guid><description>&lt;p>Today, we’re proud to launch &lt;strong>Qdrant Academy&lt;/strong>, a new learning site designed to help developers, data scientists, and engineers build real-world vector search systems.&lt;/p>
&lt;p>This started with a mission: to make learning more accessible, scalable, and frictionless for practitioners around the world. And today, we crossed the first milestone in our mission by launching our first course, &lt;a href="https://qdrant.tech/course/essentials/" target="_blank" rel="noopener nofollow">Qdrant Essentials&lt;/a>.&lt;/p>
&lt;p>With &lt;strong>Qdrant Essentials&lt;/strong>, you get a free, self-paced, structured learning course that teaches the fundamentals of vector search, embeddings, and productionizing AI systems using Qdrant. You’ll learn not just what vector search &lt;em>is&lt;/em>, but how to build, query, and optimize search with real projects, exercises, and examples.&lt;/p></description></item><item><title>How TrustGraph built enterprise-grade agentic AI with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-trustgraph/</link><pubDate>Fri, 10 Oct 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-trustgraph/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-trustgraph/trustgraph-bento-box-dark.jpg" alt="TrustGraph Overview">&lt;/p>
&lt;h1 id="trustgraph--qdrant-a-technical-deep-dive">TrustGraph + Qdrant: A Technical Deep Dive&lt;/h1>
&lt;p>When teams first experiment with agentic AI, the journey often starts with a slick demo: a few APIs stitched together, a large language model answering questions, and just enough smoke and mirrors to impress stakeholders.&lt;/p>
&lt;p>But as soon as those demos face enterprise requirements (constant data ingestion, compliance, thousands of users, 24×7 uptime), the illusion breaks. Services stall at the first failure, query reliability plummets, and regulatory guardrails are nowhere to be found. What worked in a five-minute demo becomes impossible to maintain in production.&lt;/p></description></item><item><title>All Vectors Lead to Community: Vector Space Day 2025 Recap</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-space-day-2025-recap/</link><pubDate>Tue, 30 Sep 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-space-day-2025-recap/</guid><description>&lt;p>&lt;a href="https://drive.google.com/drive/folders/1DbRQmwmMg8U255g3-ooo7Ojt9FPgHNc9?usp=sharing" target="_blank" rel="noopener nofollow">&lt;strong>[See all event slides here]&lt;/strong>&lt;/a>&lt;/p>
&lt;p>On September 26, 2025, nearly &lt;strong>400 developers, researchers, and engineers&lt;/strong> came together at the Colosseum Theater in Berlin for the first-ever &lt;strong>Qdrant Vector Space Day&lt;/strong>.&lt;/p>
&lt;p>From the start, the day belonged to the community. Over coffee and fresh Qdrant swag, the conversations quickly moved to embeddings, hybrid search, and AI agents. Laptops flipped open, QR codes were shared, and the room filled with people eager to trade ideas and learn from one another.&lt;/p></description></item><item><title>Thinking Outside the Bot with 2025 Hackathon Winners</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-space-hackathon-winners-2025/</link><pubDate>Mon, 29 Sep 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-space-hackathon-winners-2025/</guid><description>&lt;p>Over the past several weeks, builders from around the world proved that vector search is about much more than chatbots. We challenged teams to think beyond RAG, and they delivered: robotics safety reflexes, event discovery on routes, 3D shopping, video game characters, and more.&lt;/p>
&lt;p>Winners were announced live in Berlin on Friday, September 26, 2025 at &lt;a href="https://luma.com/p7w9uqtz" target="_blank" rel="noopener nofollow">Vector Space Day&lt;/a>. Full hackathon details are here: &lt;a href="https://try.qdrant.tech/hackathon-2025" target="_blank" rel="noopener nofollow">Hackathon page&lt;/a>.&lt;/p>
&lt;p>With numerous submissions from around the world, hackathon judges evaluated each entry on the criteria of Creativity, Technical Depth, and Qdrant Usage to determine the top projects. There was $10,000 in prizes from Qdrant as well as many additional bonus prizes for using partner tech:&lt;/p></description></item><item><title>Announcing the Vector Space Day 2025 Speaker Lineup</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-space-day-lineup-2025/</link><pubDate>Mon, 15 Sep 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-space-day-lineup-2025/</guid><description>&lt;h1 id="announcing-the-vector-space-day-2025-speaker-lineup">Announcing the Vector Space Day 2025 Speaker Lineup&lt;/h1>
&lt;p>We are just days away from &lt;a href="https://luma.com/p7w9uqtz" target="_blank" rel="noopener nofollow">Vector Space Day&lt;/a> in Berlin, and the full speaker lineup is here! This year’s program spans keynotes, deep-dive technical sessions, and lightning talks, covering everything from benchmarking search engines to scalable AI memory and multimodal embeddings. Here’s what to expect.&lt;/p>
&lt;h2 id="opening-keynotes">Opening Keynotes&lt;/h2>
&lt;p>The day begins with perspectives from across the ecosystem:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Andre Zayarni, Andrey Vasnetsov,&lt;/strong> and &lt;strong>Neil Kanungo&lt;/strong> sharing Qdrant’s vision for the future of vector search and how devs can engage with the Qdrant Community.&lt;/li>
&lt;li>&lt;strong>Robert Eichenseer (Microsoft), Kevin Cochrane (Vultr),&lt;/strong> and &lt;strong>Inaam Syed (AWS)&lt;/strong> offering insights on how cloud, infrastructure, and developer communities are reshaping AI systems.&lt;/li>
&lt;/ul>
&lt;h2 id="breakout-sessions">Breakout Sessions&lt;/h2>
&lt;h4 id="track-a-milky-way---architectures-infrastructure-and-multimodal-retrieval">Track A: Milky Way - Architectures, Infrastructure and Multimodal Retrieval&lt;/h4>
&lt;ul>
&lt;li>&lt;strong>AskNews&lt;/strong> - &lt;em>Building a News Sleuth for the Deep Research Paradigm:&lt;/em> How high-performance hybrid retrieval can support investigative journalism and geopolitical risk monitoring.&lt;/li>
&lt;li>&lt;strong>Delivery Hero&lt;/strong> - &lt;em>How to Cheat at Benchmarking Search Engines:&lt;/em> Lessons from building reproducible benchmarking harnesses and public leaderboards.&lt;/li>
&lt;li>&lt;strong>Neo4j&lt;/strong> - &lt;em>Hands-On GraphRAG:&lt;/em> Practical guidance on combining knowledge graphs with RAG for more explainable retrieval.&lt;/li>
&lt;li>&lt;strong>Superlinked&lt;/strong> - &lt;em>Beyond Text-Only:&lt;/em> How mixture of encoders unlocks advanced retrieval using Google DeepMind’s latest embeddings.&lt;/li>
&lt;li>&lt;strong>Jina AI&lt;/strong> - &lt;em>Vision-Language Models for Embedding:&lt;/em> Training insights for multimodal embeddings that span text, diagrams, and UI screenshots.&lt;/li>
&lt;li>&lt;strong>TwelveLabs&lt;/strong> - &lt;em>Practical Multimodal Embeddings:&lt;/em> Real workflows for cross-modal video search and recommendations.&lt;/li>
&lt;li>&lt;strong>Baseten&lt;/strong> - &lt;em>High Throughput, Low Latency Embedding Pipelines:&lt;/em> Patterns and open-source tools for production-ready embedding inference.&lt;/li>
&lt;li>&lt;strong>Google&lt;/strong> &lt;strong>DeepMind&lt;/strong> - &lt;em>Vector Search with Gemini and EmbeddingGemma:&lt;/em> Deploying cutting-edge embeddings with the right indexing strategies.&lt;/li>
&lt;/ul>
&lt;h4 id="track-b-andromeda---ai-workflows-agents-and-applications">Track B: Andromeda - AI Workflows, Agents and Applications&lt;/h4>
&lt;ul>
&lt;li>&lt;strong>Linkup&lt;/strong> - &lt;em>Beyond Web Search:&lt;/em> Infrastructure for AI-native agents that need structured, real-time web intelligence.&lt;/li>
&lt;li>&lt;strong>Cognee&lt;/strong> - &lt;em>Building Scalable AI Memory:&lt;/em> Abstractions that sync graphs and vectors for durable, multi-backend AI memory.&lt;/li>
&lt;li>&lt;strong>n8n&lt;/strong> - &lt;em>Evaluate Your Qdrant-RAG Agents:&lt;/em> A live no-code session on agent evaluation using n8n’s native tools.&lt;/li>
&lt;li>&lt;strong>Arize AI&lt;/strong> - &lt;em>Self-Improving Evaluations:&lt;/em> Feedback loops and tracing for reliable agentic RAG in production.&lt;/li>
&lt;li>&lt;strong>LlamaIndex&lt;/strong> - &lt;em>Vector Databases for Workflow Engineering:&lt;/em> Using Qdrant to orchestrate context-aware AI pipelines.&lt;/li>
&lt;li>&lt;strong>deepset&lt;/strong> - &lt;em>Agent-Powered Retrieval with Haystack and Qdrant:&lt;/em> When retrieval agents outperform or overcomplicate pipelines.&lt;/li>
&lt;li>&lt;strong>GoodData&lt;/strong> - &lt;em>Scaling Real-Time RAG for Analytics:&lt;/em> Lessons from streaming BI artifacts into Qdrant for natural-language analytics.&lt;/li>
&lt;li>&lt;strong>Equal&lt;/strong> - &lt;em>Redefining Long-Term Memory:&lt;/em> Streaming-driven ingestion architectures that give agents enterprise-grade responsiveness.&lt;/li>
&lt;/ul>
&lt;h2 id="lightning-talks">Lightning Talks&lt;/h2>
&lt;p>The afternoon features rapid-fire sessions from innovators including:&lt;/p></description></item><item><title>How Tavus used Qdrant Edge to create conversational AI</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-tavus/</link><pubDate>Fri, 12 Sep 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-tavus/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-tavus/tavus-bento-box-dark.jpg" alt="Tavus Overview">&lt;/p>
&lt;h2 id="how-tavus-delivered-human-grade-conversational-ai-with-edge-retrieval-on-qdrant">How Tavus delivered human-grade conversational AI with edge retrieval on Qdrant&lt;/h2>
&lt;p>Tavus is a human–computer research lab building CVI, the &lt;a href="https://www.tavus.io/" target="_blank">Conversational Video Interface&lt;/a>. CVI presents a face-to-face AI that reads tone, gesture, and on-screen context in real time, allowing humans to interface with powerful, functional AI like never before. The team’s north star was simple to say and hard to ship: conversations should feel natural. That meant tracking conversational dynamics like utterance-to-utterance timing, back-channeling, and turn-taking while grounding replies in a customer’s private knowledge.&lt;/p></description></item><item><title>Balancing Relevance and Diversity with MMR Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/mmr-diversity-aware-reranking/</link><pubDate>Thu, 04 Sep 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/mmr-diversity-aware-reranking/</guid><description>&lt;p>Variety is the spice of life! Yet often, with search engines, users find that the results are too similar to get value. You search for a black jacket on your favorite shopping site, and you get 5 black full-zip bomber jackets. Search for a black dress and you get 5 strapless dresses. 
Traditional vector search focuses on returning the most relevant items, which creates an echo chamber of similar results.&lt;/p></description></item><item><title>How Fieldy AI Achieved Reliable AI Memory with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-fieldy/</link><pubDate>Thu, 04 Sep 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-fieldy/</guid><description>&lt;h2 id="fieldy-ais-migration-to-qdrant-building-a-fault-tolerant-ai-memory-platform">Fieldy AI’s migration to Qdrant: Building a fault-tolerant AI memory platform&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-fieldy/case-study-fieldy-bento-dark.jpg" alt="How Fieldy AI Achieved Reliable AI Memory with Qdrant">&lt;/p>
&lt;h3 id="capturing-and-retrieving-a-lifetime-of-conversations">Capturing and retrieving a lifetime of conversations&lt;/h3>
&lt;p>&lt;a href="https://fieldy.ai/" target="_blank">Fieldy&lt;/a> is a hands-free wearable AI note taker that continuously records, transcribes, and organizes real-world conversations into your personal, searchable memory. The system’s goal is simple in concept but demanding in execution: capture every relevant spoken interaction, transcribe it with high accuracy, and make it instantly retrievable. This requires a robust ingestion pipeline, a scalable &lt;a href="https://qdrant.tech/documentation/overview/" target="_blank" rel="noopener nofollow">vector search&lt;/a> layer, and a retrieval process capable of handling growing volumes of multimodal data without introducing latency or errors.&lt;/p></description></item><item><title>How OpenTable Reinvented Restaurant Discovery with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-opentable/</link><pubDate>Tue, 02 Sep 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-opentable/</guid><description>&lt;h2 id="reinventing-restaurant-discovery-how-opentable-built-concierge-an-ai-dining-assistant">&lt;strong>Reinventing Restaurant Discovery: How OpenTable built Concierge, an AI Dining Assistant&lt;/strong>&lt;/h2>
&lt;h3 id="recognizing-that-ai-would-redefine-restaurant-discovery">Recognizing that AI would redefine restaurant discovery&lt;/h3>
&lt;p>When generative AI tools entered the mainstream, OpenTable knew diners would change how they find and choose restaurants. People were beginning to expect conversational, intelligent and context-aware assistants, rather than static search boxes.&lt;/p>
&lt;p>Patrick Lombardo, Staff ML Engineer at OpenTable, recalls that the team wanted to move quickly. “We knew early on that generative AI was going to change user expectations. Concierge was an opportunity for us to transform the way that diners discover restaurants while building the tooling and infrastructure that will support future AI-powered experiences.”&lt;/p></description></item><item><title>Untangling Relevance Score Boosting and Decay Functions</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/decay-functions/</link><pubDate>Mon, 01 Sep 2025 14:55:45 +0200</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/decay-functions/</guid><description>&lt;p>A problem we&amp;rsquo;ve noticed while monitoring the &lt;a href="https://discord.gg/d4MPnX3s" target="_blank" rel="noopener nofollow">Qdrant Discord Community&lt;/a> is that due to the extensive list of expressions that the &lt;a href="https://qdrant.tech/documentation/concepts/hybrid-queries/#score-boosting" target="_blank" rel="noopener nofollow">score boosting&lt;/a> functionality provides, there&amp;rsquo;s room for confusion on how it&amp;rsquo;s supposed to be applied. And that might block you from moving the business logic behind relevance scoring into the Qdrant search engine. We don&amp;rsquo;t want that!&lt;/p>
&lt;p>In this blog, we&amp;rsquo;d like to de-spooky-fy the &lt;strong>decay functions&lt;/strong> part of the score boosting, or, more precisely: &lt;code>LinDecayExpression&lt;/code>, &lt;code>ExpDecayExpression&lt;/code>, and &lt;code>GaussDecayExpression&lt;/code> &amp;ndash; frequent guests on the Discord &lt;em>#ask-for-help&lt;/em> channel.&lt;/p></description></item><item><title>How PortfolioMind Delivered Real-Time Crypto Intelligence with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-portfolio-mind/</link><pubDate>Thu, 31 Jul 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-portfolio-mind/</guid><description>&lt;h2 id="how-portfoliomind-delivered-real-time-crypto-intelligence-with-qdrant">&lt;strong>How PortfolioMind delivered real-time crypto intelligence with Qdrant&lt;/strong>&lt;/h2>
&lt;p>The crypto world is an inherently noisy and volatile place. Markets shift quickly, narratives change overnight, and wallet activities conceal subtle yet critical patterns. For PortfolioMind, a Web3-native AI research copilot built using the &lt;a href="https://spoonai.io/" target="_blank" rel="noopener nofollow">SpoonOS framework&lt;/a>, the challenge was not just finding relevant information, but surfacing it in real time.&lt;/p>
&lt;h3 id="challenge-moving-beyond-static-insights">Challenge: Moving beyond static insights&lt;/h3>
&lt;p>Most crypto platforms presume users want simple token tracking. PortfolioMind, however, recognized that real research behaviors are dynamic. Users pivot rapidly between topics like L2 scaling, meme tokens, protocol risks, and DeFi yield fluctuations based on real-time events.&lt;/p></description></item><item><title>Qdrant Edge: Vector Search for Embedded AI</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-edge/</link><pubDate>Tue, 29 Jul 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-edge/</guid><description>&lt;h1 id="qdrant-edge-private-beta-vector-search-for-embedded-ai">Qdrant Edge (Private Beta): Vector Search for Embedded AI&lt;/h1>
&lt;p>Over the past two years, vector search has become foundational infrastructure for AI applications, from retrieval-augmented generation (RAG) to agentic reasoning. But as AI systems extend beyond cloud-hosted inference into the physical world - running on devices like robots, kiosks, home assistants, and mobile phones - new constraints emerge. Low-latency retrieval, multimodal inputs, and bandwidth-independent operation will become first-class requirements. &lt;strong>Qdrant Edge&lt;/strong> is our response to this shift.&lt;/p></description></item><item><title>Qdrant for Research: The Story Behind ETH &amp; Stanford’s MIRIAD Dataset</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/miriad-qdrant/</link><pubDate>Wed, 23 Jul 2025 00:00:00 +0200</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/miriad-qdrant/</guid><description>&lt;p>This summer, researchers from ETH Zurich and Stanford &lt;a href="https://www.linkedin.com/posts/qinyue-zheng-526b391a4_we-just-released-a-million-scale-medical-activity-7337889277445365760-Criy" target="_blank" rel="noopener nofollow">released &lt;strong>MIRIAD&lt;/strong>&lt;/a>, an open source dataset of &lt;strong>5.8 million medical Question Answer pairs&lt;/strong>, each grounded in peer-reviewed literature.&lt;/p>
&lt;p>A dataset of this scale has the potential to become an &lt;strong>ultimate solution to the lack of structured, context-rich, high-quality data in the medical field&lt;/strong>. It can significantly reduce hallucinations in medical AI applications, serving as a knowledge base for Retrieval-Augmented Generation (RAG) and as a source for training downstream embedding models.&lt;/p></description></item><item><title>Qdrant 1.15 - Smarter Quantization &amp; better Text Filtering</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.15.x/</link><pubDate>Fri, 18 Jul 2025 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.15.x/</guid><description>&lt;p>&lt;a href="https://github.com/qdrant/qdrant/releases/tag/v1.15.0" target="_blank" rel="noopener nofollow">&lt;strong>Qdrant 1.15.0 is out!&lt;/strong>&lt;/a> Let’s look at the main features for this version:&lt;/p>
&lt;p>&lt;strong>New quantizations:&lt;/strong> We introduce asymmetric quantization along with 1.5-bit and 2-bit quantizations. Asymmetric quantization allows vectors and queries to use different quantization algorithms, while 1.5-bit and 2-bit quantizations allow for improved accuracy.&lt;/p>
&lt;p>&lt;strong>Changes in text index&lt;/strong>: Introduction of a new multilingual tokenizer, stopwords support, stemming, and phrase matching.&lt;/p>
&lt;p>Various optimizations, including &lt;strong>HNSW healing&lt;/strong>, which allows HNSW indexes to reuse the old graph without a complete rebuild, and &lt;strong>Migration to Gridstore&lt;/strong>, which unlocks faster ingestion.&lt;/p></description></item><item><title>Qdrant joins AI Agent category on AWS Marketplace to accelerate Agentic AI development</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/ai-agents-aws-marketplace/</link><pubDate>Wed, 16 Jul 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/ai-agents-aws-marketplace/</guid><description>&lt;h3 id="qdrant-is-now-available-in-the-new-aws-marketplace-ai-agents-and-tools-category">Qdrant is now available in the new AWS Marketplace AI Agents and Tools category.&lt;/h3>
&lt;p>Customers can now use AWS Marketplace to easily discover, buy, and deploy AI agent solutions, including Qdrant’s vector search engine, using their AWS accounts, accelerating AI agent and agentic workflow development.&lt;/p>
&lt;p>Qdrant helps organizations build enterprise AI agents with long-term memory and real-time context retrieval by enabling step-aware reasoning and reliable decision-making across complex, unstructured data with a vector-native search engine built for accuracy, scale, and responsiveness.&lt;/p></description></item><item><title>How &amp;AI scaled global legal retrieval with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-and-ai/</link><pubDate>Tue, 15 Jul 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-and-ai/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-and-ai/and-ai-bento.jpg" alt="Bento Box">&lt;/p>
&lt;h2 id="how-ai-scaled-global-patent-retrieval-with-qdrant">How &amp;amp;AI scaled global patent retrieval with Qdrant&lt;/h2>
&lt;p>&lt;a href="https://tryandai.com/" target="_blank" rel="noopener nofollow">&amp;amp;AI&lt;/a> is on a mission to redefine patent litigation. Their platform helps legal professionals invalidate patents through intelligent prior art search, claim charting, and automated litigation support. To make this work at scale, CTO and co-founder Herbie Turner needed a vector database that could power fast, accurate retrieval across billions of documents without ballooning DevOps complexity. That’s where Qdrant came in.&lt;/p></description></item><item><title>Introducing Qdrant Cloud Inference</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-cloud-inference-launch/</link><pubDate>Tue, 15 Jul 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-cloud-inference-launch/</guid><description>&lt;h1 id="introducing-qdrant-cloud-inference">Introducing Qdrant Cloud Inference&lt;/h1>
&lt;p>Today, we’re announcing the launch of Qdrant Cloud Inference (&lt;a href="https://cloud.qdrant.io/" target="_blank" rel="noopener nofollow">get started in your cluster&lt;/a>). With Qdrant Cloud Inference, users can generate, store and index embeddings in a single API call, turning unstructured text and images into search-ready vectors in a single environment. Directly integrating model inference into Qdrant Cloud removes the need for separate inference infrastructure, manual pipelines, and redundant data transfers.&lt;/p>
&lt;p>This simplifies workflows, accelerates development cycles, and eliminates unnecessary network hops for developers. With a single API call, you can now embed, store, and index your data more quickly and more simply. This speeds up application development for RAG, Multimodal, Hybrid search, and more.&lt;/p></description></item><item><title>Announcing Vector Space Day 2025 in Berlin</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-space-day-2025/</link><pubDate>Mon, 14 Jul 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-space-day-2025/</guid><description>&lt;h2 id="vector-space-day-2025-powered-by-qdrant">Vector Space Day 2025: Powered by Qdrant&lt;/h2>
&lt;p>📍 Colosseum Berlin, Germany&lt;br>
🗓️ Friday, September 26, 2025&lt;/p>
&lt;h3 id="about">About&lt;/h3>
&lt;p>We’re hosting our first-ever full-day in-person &lt;a href="https://lu.ma/p7w9uqtz" target="_blank" rel="noopener nofollow">&lt;strong>Vector Space Day&lt;/strong>&lt;/a> this September in Berlin, and you’re invited.&lt;/p>
&lt;p>The Vector Space Day will bring together engineers, researchers, and AI builders to explore the cutting edge of retrieval, vector search infrastructure, and agentic AI. From building scalable RAG pipelines to enabling real-time AI memory and next-gen context engineering, we’re covering the full spectrum of modern vector-native search.&lt;/p></description></item><item><title>How Pento modeled aesthetic taste with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pento/</link><pubDate>Mon, 14 Jul 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pento/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pento/pento-bento-box-dark.jpg" alt="pento bento box">&lt;/p>
&lt;h1 id="bringing-people-together-through-qdrant">Bringing People Together Through Qdrant&lt;/h1>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pento/pento-cover-image.png" alt="pento-cover-image">&lt;/p>
&lt;h2 id="taste-in-art-isnt-just-a-preference-its-a-fingerprint">&lt;em>Taste in art isn’t just a preference; it’s a fingerprint.&lt;/em>&lt;/h2>
&lt;p>Imagine you&amp;rsquo;re an artist or art enthusiast searching not for a painting, but for people who share your unique taste, someone who resonates with surrealist colors just as deeply as you, or who finds quiet joy in minimalist lines. How would a system know who those people are? Traditional recommenders often suggest what’s trending or popular, or just can&amp;rsquo;t understand the nuances of art.&lt;/p></description></item><item><title>How Alhena AI unified its AI stack and improved ecommerce conversions with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-alhena/</link><pubDate>Thu, 10 Jul 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-alhena/</guid><description>&lt;h1 id="how-alhena-ai-unified-its-ai-stack-and-accelerated-ecommerce-outcomes-with-qdrant">How Alhena AI unified its AI stack and accelerated ecommerce outcomes with Qdrant&lt;/h1>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-alhena/alhena-bento-box-dark.jpg" alt="How Alhena AI unified its AI stack and improved ecommerce conversions with Qdrant">&lt;/p>
&lt;h2 id="building-ai-agents-that-drive-both-revenue-and-support-outcomes">Building AI agents that drive both revenue and support outcomes&lt;/h2>
&lt;p>&lt;a href="https://alhena.ai/" target="_blank">Alhena AI&lt;/a> is redefining the ecommerce experience through intelligent agents that assist customers before and after a purchase. On the front end, these agents help users find the perfect product based on nuanced preferences. On the back end, they resolve complex support queries without escalating to a human.&lt;/p></description></item><item><title>How GoodData turbocharged AI analytics with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-gooddata/</link><pubDate>Wed, 09 Jul 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-gooddata/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-gooddata/gooddata-bento-box-dark.jpg" alt="Gooddata Overview">&lt;/p>
&lt;h3 id="gooddatas-evolution-into-ai-powered-analytics">GoodData&amp;rsquo;s Evolution into AI-Powered Analytics&lt;/h3>
&lt;p>AI is redefining how people interact with data, pushing analytics platforms beyond static dashboards toward intelligent, conversational experiences. While traditionally recognized as a powerful BI platform, GoodData is laser-focused on accelerating both &amp;lsquo;time to insight&amp;rsquo; and &amp;lsquo;time to solution&amp;rsquo; by enhancing productivity for analysts and business users alike.&lt;/p>
&lt;p>What sets GoodData apart is its unique position in the market: a composable, API-first platform designed for teams that build data products, not just consume them. With deep support for white-labeled analytics, embedded use cases, and governed self-service at scale, GoodData delivers the flexibility modern organizations need. With AI being integrated across every layer of the platform, GoodData is helping their over 140,000 end customers move from traditional BI to intelligent, real-time decision-making.&lt;/p></description></item><item><title>The Hitchhiker's Guide to Vector Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hitchhikers-guide/</link><pubDate>Wed, 09 Jul 2025 00:00:00 +0200</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hitchhikers-guide/</guid><description>&lt;blockquote>
&lt;p>From lecture halls to production pipelines, &lt;a href="https://qdrant.tech/stars/" target="_blank" rel="noopener nofollow">Qdrant Stars&lt;/a> &amp;ndash; founders, mentors and open-source contributors &amp;ndash; share how they’re building with vectors in the wild.&lt;br>
In this post, Clelia distils tips from her talk at the &lt;a href="https://lu.ma/based_meetup" target="_blank" rel="noopener nofollow">“Bavaria, Advancements in SEarch Development” meetup&lt;/a>, where she covered hard-won lessons from her extensive open-source building.&lt;/p>
&lt;/blockquote>
&lt;p>&lt;em>Hey there, vector space astronauts!&lt;/em>&lt;/p>
&lt;p>&lt;em>I am Clelia, an Open Source Engineer at &lt;a href="https://www.llamaindex.ai/" target="_blank" rel="noopener nofollow">LlamaIndex&lt;/a>. In the last two years, I&amp;rsquo;ve dedicated myself to the AI space, building (and breaking) many things, and sometimes even deploying them to production!&lt;/em>&lt;/p></description></item><item><title>How FAZ unlocked 75 years of journalism with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-faz/</link><pubDate>Thu, 03 Jul 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-faz/</guid><description>&lt;h1 id="how-faz-built-a-hybrid-search-engine-with-qdrant-to-unlock-75-years-of-journalism">How FAZ Built a Hybrid Search Engine with Qdrant to Unlock 75 Years of Journalism&lt;/h1>
&lt;p>&lt;a href="https://www.frankfurterallgemeine.de/die-faz" target="_blank" rel="noopener nofollow">Frankfurter Allgemeine Zeitung (FAZ)&lt;/a>, a major national newspaper in Germany, has spent decades building a rich archive of journalistic content, stretching back to 1949. The FAZ archive has long built expertise in making its extensive collection of over 75 years accessible and searchable for both internal and external customers through keyword- and index-based search engines. New AI-powered search technologies were therefore immediately recognized as an opportunity to unlock the potential of the comprehensive archive in entirely new ways and to systematically address the limitations of traditional search methods. The solution they arrived at involved a thoughtful orchestration of technologies - with Qdrant at the heart.&lt;/p></description></item><item><title>GraphRAG: How Lettria Unlocked 20% Accuracy Gains with Qdrant and Neo4j</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-lettria-v2/</link><pubDate>Tue, 17 Jun 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-lettria-v2/</guid><description>&lt;h1 id="scaled-vector--graph-retrieval-how-lettria-unlocked-20-accuracy-gains-with-qdrant--neo4j">Scaled Vector &amp;amp; Graph Retrieval: How Lettria Unlocked 20% Accuracy Gains with Qdrant &amp;amp; Neo4j&lt;/h1>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-lettria/lettria-bento-dark.jpg" alt="Lettria increases accuracy by 20% by blending Qdrant&amp;rsquo;s vector search and Neo4j&amp;rsquo;s knowledge graphs">&lt;/p>
&lt;h2 id="why-complex-document-intelligence-needs-more-than-just-vector-search">Why Complex Document Intelligence Needs More Than Just Vector Search&lt;/h2>
&lt;p>In regulated industries where precision, auditability, and accuracy are paramount, leveraging Large Language Models (LLMs) effectively often requires going beyond traditional Retrieval-Augmented Generation (RAG). &lt;a href="https://www.lettria.com/" target="_blank" rel="noopener nofollow">Lettria&lt;/a>, a leader in document intelligence platforms, recognized that complex, highly regulated data sets like pharmaceutical research, legal compliance, and aerospace documentation demanded superior accuracy and more explainable outputs than vector-only RAG systems could provide. To achieve the expected level of performance, the team has focused its efforts on building a robust document parsing engine designed for complex PDFs (with tables, diagrams, charts, etc.), an automatic ontology builder, and an ingestion pipeline covering vector and graph enrichment.&lt;/p></description></item><item><title>Vector Data Migration Tool</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/beta-database-migration-tool/</link><pubDate>Mon, 16 Jun 2025 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/beta-database-migration-tool/</guid><description>&lt;h2 id="migrating-your-data-just-got-easier">Migrating your data just got easier&lt;/h2>
&lt;p>We’ve launched the &lt;strong>beta&lt;/strong> of our Qdrant &lt;strong>Vector Data Migration Tool&lt;/strong>, designed to simplify moving data between different instances, whether you&amp;rsquo;re migrating between Qdrant deployments or switching from other vector database providers.&lt;/p>
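Conceptually, a migration like this streams points from a source collection to a target in successive batches until the source is exhausted. The following minimal Python sketch illustrates only that pattern, using hypothetical in-memory helpers (`fetch_batch`, `upsert_batch`); it is not the migration tool's actual implementation or API.

```python
# Illustrative batch-streaming pattern for migrating points between
# two stores. fetch_batch / upsert_batch are hypothetical stand-ins,
# not the Qdrant migration tool's real interface.

def fetch_batch(source, offset, limit):
    """Read up to `limit` points from the source, starting at `offset`.

    Returns the batch and the next offset, or None when exhausted."""
    batch = source[offset:offset + limit]
    next_offset = (offset + len(batch)) if batch else None
    return batch, next_offset

def upsert_batch(target, batch):
    """Write one batch of points into the target store."""
    target.extend(batch)

def migrate(source, target, batch_size=64):
    """Stream all points from source to target in live batches."""
    offset = 0
    while offset is not None:
        batch, offset = fetch_batch(source, offset, batch_size)
        if batch:
            upsert_batch(target, batch)

source = [{"id": i, "vector": [float(i)]} for i in range(150)]
target = []
migrate(source, target)
print(len(target))  # all 150 points arrive, streamed in batches of 64
```

The actual tool goes further, covering migrations from other vector database providers into Qdrant and running end to end with a single command.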
&lt;p>This powerful tool streams all vectors from a source collection to a target Qdrant instance in live batches. It supports migrations from one Qdrant deployment to another, including from open source to Qdrant Cloud or between cloud regions. But that&amp;rsquo;s not all. You can also migrate your data from other vector databases directly into Qdrant. All with a single command.&lt;/p></description></item><item><title>How Lawme Scaled AI Legal Assistants and Significantly Cut Costs with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-lawme/</link><pubDate>Wed, 11 Jun 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-lawme/</guid><description>&lt;h2 id="how-lawme-scaled-ai-legal-assistants-and-cut-costs-by-75-with-qdrant">How Lawme Scaled AI Legal Assistants and Cut Costs by 75% with Qdrant&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-lawme/lawme-bento-dark.jpg" alt="How Lawme Scaled AI Legal Assistants and Cut Costs 75% with Qdrant">&lt;/p>
&lt;p>Legal technology (LegalTech) is at the forefront of digital transformation in the traditionally conservative legal industry. Lawme.ai, an ambitious startup, is pioneering this transformation by automating routine legal workflows with AI assistants. By leveraging sophisticated AI-driven processes, Lawme empowers law firms to dramatically accelerate legal document preparation, from initial research and analysis to comprehensive drafting. However, scaling their solution presented formidable challenges, particularly around data management, compliance, and operational costs.&lt;/p></description></item><item><title>How ConvoSearch Boosted Revenue for D2C Brands with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-convosearch/</link><pubDate>Tue, 10 Jun 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-convosearch/</guid><description>&lt;h2 id="how-convosearch-boosted-e-commerce-revenue-with-qdrant">How ConvoSearch Boosted E-commerce Revenue with Qdrant&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-convosearch/convosearch-bento-dark.jpg" alt="How ConvoSearch Boosted E-commerce Revenue with Qdrant">&lt;/p>
&lt;h3 id="driving-e-commerce-success-through-enhanced-search">Driving E-commerce Success Through Enhanced Search&lt;/h3>
&lt;p>E-commerce retailers face intense competition and constant pressure to increase conversion rates. &lt;a href="https://convosearch.com/" target="_blank" rel="noopener nofollow">ConvoSearch&lt;/a>, an AI-powered recommendation engine tailored for direct-to-consumer (D2C) e-commerce brands, addresses these challenges by delivering hyper-personalized search and recommendations. With customers like The Closet Lover and Uncle Reco achieving dramatic revenue increases, ConvoSearch relies heavily on high-speed vector search to ensure relevance and accuracy at scale.&lt;/p></description></item><item><title>LegalTech Builder's Guide: Navigating Strategic Decisions with Vector Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/legal-tech-builders-guide/</link><pubDate>Tue, 10 Jun 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/legal-tech-builders-guide/</guid><description>&lt;h2 id="legaltech-builders-guide-navigating-strategic-decisions-with-vector-search">LegalTech Builder&amp;rsquo;s Guide: Navigating Strategic Decisions with Vector Search&lt;/h2>
&lt;h3 id="legaltech-innovation-needs-a-new-search-stack">LegalTech innovation needs a new search stack&lt;/h3>
&lt;p>LegalTech applications, more than most other application types, demand accuracy due to complex document structures, high regulatory stakes, and compliance requirements. Traditional keyword searches often fall short, failing to grasp semantic nuances essential for precise legal queries. &lt;a href="https://qdrant.tech/" target="_blank" rel="noopener nofollow">Qdrant&lt;/a> addresses these challenges by providing robust vector search solutions tailored for the complexities inherent in LegalTech applications.&lt;/p></description></item><item><title>Qdrant Achieves SOC 2 Type II and HIPAA Certifications</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/soc-2-type-ii-hipaa/</link><pubDate>Tue, 10 Jun 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/soc-2-type-ii-hipaa/</guid><description>&lt;h2 id="qdrant-attains-soc-2-type-ii-and-hipaa-certifications-strengthening-our-commitment-to-enterprise-security">Qdrant Attains SOC 2 Type II and HIPAA Certifications: Strengthening Our Commitment to Enterprise Security&lt;/h2>
&lt;p>At Qdrant, we&amp;rsquo;re proud to announce that we&amp;rsquo;ve successfully renewed our SOC 2 Type II certification and attained our HIPAA compliance certification (&lt;a href="http://qdrant.to/trust-center" target="_blank" rel="noopener nofollow">link&lt;/a>). This continued achievement highlights our unwavering dedication to maintaining robust security, confidentiality, and compliance standards, especially critical in supporting &lt;a href="https://qdrant.tech/enterprise-solutions/" target="_blank" rel="noopener nofollow">enterprise-scale operations&lt;/a> and sensitive data management.&lt;/p>
&lt;h3 id="soc-2-type-ii-continuous-commitment-to-security">SOC 2 Type II: Continuous Commitment to Security&lt;/h3>
&lt;p>Building on our initial SOC 2 Type II certification from 2024, we sustained our rigorous security and operational practices over a full 12-month observation period. SOC 2 Type II audits meticulously assess the practical implementation of security measures aligned with the American Institute of Certified Public Accountants (AICPA) Trust Services Criteria:&lt;/p></description></item><item><title>Introducing the Official Qdrant Node for n8n</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/n8n-node/</link><pubDate>Mon, 09 Jun 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/n8n-node/</guid><description>&lt;h2 id="introducing-the-official-qdrant-node-for-n8n">Introducing the Official Qdrant Node for n8n&lt;/h2>
&lt;p>Amazing news for n8n builders working with semantic search: Qdrant now has an &lt;a href="https://www.npmjs.com/package/n8n-nodes-qdrant" target="_blank" rel="noopener nofollow">official, team-supported node for n8n&lt;/a>, an early adopter of n8n&amp;rsquo;s new &lt;a href="https://docs.n8n.io/integrations/creating-nodes/deploy/submit-community-nodes/#submit-your-node-for-verification-by-n8n" target="_blank" rel="noopener nofollow">verified community nodes&lt;/a> feature!&lt;/p>
&lt;p>This new integration brings the full power of Qdrant directly into your n8n workflows: no more wrestling with HTTP nodes!
Whether you’re building RAG systems, agentic pipelines, or advanced data analysis tools, this node is designed to make your life easier and your solutions more robust.&lt;/p></description></item><item><title>Qdrant + DataTalks.Club: Free 10-Week Course on LLM Applications</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/datatalks-course/</link><pubDate>Thu, 05 Jun 2025 23:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/datatalks-course/</guid><description>&lt;p>Want to learn how to build an AI system that answers questions about your knowledge base?&lt;/p>
&lt;p>We’re excited to announce our partnership with Alexey Grigorev and DataTalks.Club to bring you a free, hands-on, 10-week course focused on building real-life applications of LLMs.&lt;/p>
&lt;p>Gain hands-on experience with LLMs, RAG, vector search, evaluation, monitoring, and more.&lt;/p>
&lt;h2 id="learn-rag-and-vector-search">Learn RAG and Vector Search&lt;/h2>
&lt;p>In this course, you&amp;rsquo;ll learn how to create an AI system that can answer questions about your own knowledge base using LLMs and RAG.&lt;/p></description></item><item><title>How Qovery Accelerated Developer Autonomy with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-qovery/</link><pubDate>Tue, 27 May 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-qovery/</guid><description>&lt;h2 id="qovery-scales-real-time-devops-automation-with-qdrant">Qovery Scales Real-Time DevOps Automation with Qdrant&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-qovery/case-study-qovery-summary-dark.png" alt="How Qovery Accelerated Developer Autonomy with Qdrant">&lt;/p>
&lt;h3 id="empowering-developers-with-autonomous-infrastructure-management">Empowering Developers with Autonomous Infrastructure Management&lt;/h3>
&lt;p>Qovery, trusted by over 200 companies including Alan, Talkspace, GetSafe, and RxVantage, empowers software engineering teams to autonomously manage their infrastructure through its robust DevOps automation platform. As their platform evolved, Qovery recognized an opportunity to enhance developer autonomy further by integrating an AI-powered DevOps Copilot. To achieve real-time accuracy and rapid responses, Qovery selected Qdrant as the backbone of their vector database infrastructure.&lt;/p></description></item><item><title>How Tripadvisor Drives 2 to 3x More Revenue with Qdrant-Powered AI</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-tripadvisor/</link><pubDate>Tue, 13 May 2025 23:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-tripadvisor/</guid><description>&lt;h1 id="how-tripadvisor-is-reimagining-travel-with-qdrant">How Tripadvisor Is Reimagining Travel with Qdrant&lt;/h1>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-tripadvisor/case-study-tripadvisor-summary-dark.jpg" alt="How Tripadvisor Drives 2–3x More Revenue with Qdrant-Powered AI">&lt;/p>
&lt;p>Tripadvisor, the world’s largest travel guidance platform, is undergoing a deep transformation. With hundreds of millions of monthly users and over a billion reviews and contributions, it holds one of the richest datasets in the travel industry. And until recently, that data, particularly its unstructured content, had incredible untapped potential. Now, with the rise of generative AI and the adoption of tools like Qdrant’s vector database, Tripadvisor is unlocking its full potential to deliver intelligent, personalized, and high-impact travel experiences.&lt;/p></description></item><item><title>Precision at Scale: How Aracor Accelerated Legal Due Diligence with Hybrid Vector Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-aracor/</link><pubDate>Tue, 13 May 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-aracor/</guid><description>&lt;h2 id="precision-at-scale-how-aracor-uses-qdrant-to-accelerate-legal-due-diligence-resulting-in-90-faster-workflows">Precision at Scale: How Aracor Uses Qdrant to Accelerate Legal Due Diligence Resulting in 90% Faster Workflows&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-aracor/case-study-aracor-bento-dark.jpg" alt="How Aracor Sped Up Due Diligence Workflows by 90%">&lt;/p>
&lt;h3 id="how-aracor-accelerated-legal-due-diligence-with-qdrant-vector-search">How Aracor Accelerated Legal Due Diligence with Qdrant Vector Search&lt;/h3>
&lt;p>The world of mergers and acquisitions (M&amp;amp;A) is notoriously painstaking, slow, expensive and error-prone. Lawyers spend weeks combing through thousands of documents—validating signatures, comparing versions, and flagging risks.&lt;/p>
&lt;p>Lawyers and dealmakers sift through mountains of documents—often numbering into the thousands—to validate every detail, from signatures and deal document versions to risk flags and patent validity. This meticulous process typically drains weeks or even months of productivity from highly trained professionals. &lt;a href="https://aracor.ai/" target="_blank" rel="noopener nofollow">Aracor AI&lt;/a> set out to change that and to solve the M&amp;amp;A transparency gap. The Miami-based AI platform is laser-focused on transforming this painstaking due diligence into an automated, accurate, and dramatically faster operation.&lt;/p></description></item><item><title>How Garden Scaled Patent Intelligence with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-garden-intel/</link><pubDate>Fri, 09 May 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-garden-intel/</guid><description>&lt;h2 id="garden-accelerates-patent-intelligence-with-qdrants-filterable-vector-search">Garden Accelerates Patent Intelligence with Qdrant’s Filterable Vector Search&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-garden/case-study-garden-bento-dark.jpg" alt="How Garden Unlocked AI Patent Analysis">&lt;/p>
&lt;p>For more than a century, patent litigation has been a slow, people-powered business. Analysts read page after page—sometimes tens of thousands of pages—hunting for the smoking-gun paragraph that proves infringement or invalidity. Garden, a New York-based startup, set out to change that by applying large-scale AI to the entire global patent corpus—more than 200 million patents—in conjunction with terabytes of real-world data.&lt;/p></description></item><item><title>Exploring Qdrant Cloud Just Got Easier</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/product-ui-changes/</link><pubDate>Tue, 06 May 2025 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/product-ui-changes/</guid><description>&lt;h1 id="exploring-qdrant-cloud-just-got-easier">Exploring Qdrant Cloud just got easier&lt;/h1>
&lt;p>We always aim to simplify our product for developers, platform teams, and enterprises.&lt;/p>
&lt;p>Here’s a quick overview of recent improvements designed to simplify your journey from login, creating your first cluster, prototyping, and going to production.&lt;/p>
&lt;iframe width="560" height="315" src="https://www.youtube.com/embed/J75pNicPEo8?si=1HznwER1Kqx5ZrLG" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen>&lt;/iframe>
&lt;h2 id="simplified-login">Simplified Login&lt;/h2>
&lt;p>We&amp;rsquo;ve reduced the steps to create and access your account, and also simplified navigation between login and registration.&lt;/p></description></item><item><title>How Pariti Doubled Its Fill Rate with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pariti/</link><pubDate>Thu, 01 May 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pariti/</guid><description>&lt;h2 id="from-manual-bottlenecks-to-millisecond-matching-connecting-africas-best-talent">From Manual Bottlenecks to Millisecond Matching: Connecting Africa’s Best Talent&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pariti/case-study-pariti-summary-dark.jpg" alt="Pariti slashes vetting time and boosted candidate placement success.">&lt;/p>
&lt;p>Pariti’s mission is bold: connect Africa’s best talent with the continent’s most promising startups—fast. Its referral-driven marketplace lets anyone nominate a great candidate, but viral growth triggered an avalanche of data. A single job post now attracts more than 300 applicants within 72 hours, yet Pariti still promises clients an interview-ready shortlist within five days.&lt;/p></description></item><item><title>How Dust Scaled to 5,000+ Data Sources with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-dust-v2/</link><pubDate>Tue, 29 Apr 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-dust-v2/</guid><description>&lt;h2 id="inside-dusts-vector-stack-overhaul-scaling-to-5000-data-sources-with-qdrant">Inside Dust’s Vector Stack Overhaul: Scaling to 5,000+ Data Sources with Qdrant&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-dust-v2/case-study-dust-v2-v2-bento-dark.jpg" alt="How Dust Scaled to 5,000+ Data Sources with Qdrant">&lt;/p>
&lt;h3 id="the-challenge-scaling-ai-infrastructure-for-thousands-of-data-sources">The Challenge: Scaling AI Infrastructure for Thousands of Data Sources&lt;/h3>
&lt;p>Dust, an OS for AI-native companies that lets users build AI agents powered by actions and company knowledge, faced a set of growing technical hurdles as it scaled its operations. The company&amp;rsquo;s core product gives AI agents secure access to internal and external data resources, enabling enhanced workflows and faster access to information. However, this mission hit bottlenecks when its infrastructure began to strain under the weight of thousands of data sources and increasingly demanding user queries.&lt;/p></description></item><item><title>How SayOne Enhanced Government AI Services with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-sayone/</link><pubDate>Mon, 28 Apr 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-sayone/</guid><description>&lt;h2 id="how-sayone-enhanced-government-ai-services-with-qdrant">How SayOne Enhanced Government AI Services with Qdrant&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-sayone/case-study-sayone-summary-dark.jpg" alt="SayOne Enhanced Government AI Services">&lt;/p>
&lt;h3 id="the-challenge">The Challenge&lt;/h3>
&lt;p>SayOne is an information technology and digital services company headquartered in India. They create end-to-end customized digital solutions, and have completed over 200 projects for clients worldwide. When SayOne embarked on building advanced AI solutions for government institutions, their initial choice was Pinecone, primarily due to its prevalence within AI documentation. However, SayOne soon discovered significant limitations impacting their projects. Key challenges included escalating costs, restrictive customization options, and considerable scalability issues. Furthermore, reliance on external cloud infrastructure posed critical data privacy concerns, especially since governmental entities demanded stringent data sovereignty and privacy controls.&lt;/p></description></item><item><title>Beyond Multimodal Vectors: Hotel Search With Superlinked and Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/superlinked-multimodal-search/</link><pubDate>Thu, 24 Apr 2025 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/superlinked-multimodal-search/</guid><description>&lt;h2 id="more-than-just-multimodal-search">More Than Just Multimodal Search?&lt;/h2>
&lt;p>AI has transformed how we find products, services, and content. Now users express needs in &lt;strong>natural language&lt;/strong> and expect precise, tailored results.&lt;/p>
&lt;p>For example, you might search for hotels in Paris with specific criteria:&lt;/p>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/superlinked-multimodal-search/superlinked-search.png" alt="superlinked-search">&lt;/p>
&lt;p>&lt;em>&amp;ldquo;Affordable luxury hotels near Eiffel Tower with lots of good reviews and free parking.&amp;rdquo;&lt;/em> This isn&amp;rsquo;t just a search query—it&amp;rsquo;s a complex set of interrelated preferences spanning multiple data types.&lt;/p></description></item><item><title>Qdrant 1.14 - Reranking Support &amp; Extensive Resource Optimizations</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.14.x/</link><pubDate>Tue, 22 Apr 2025 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.14.x/</guid><description>&lt;p>&lt;a href="https://github.com/qdrant/qdrant/releases/tag/v1.14.0" target="_blank" rel="noopener nofollow">&lt;strong>Qdrant 1.14.0 is out!&lt;/strong>&lt;/a> Let&amp;rsquo;s look at the main features for this version:&lt;/p>
&lt;p>&lt;strong>Score-Boosting Reranker:&lt;/strong> Blend vector similarity with custom rules and context.&lt;br>
&lt;strong>Improved Resource Utilization:&lt;/strong> CPU and disk I/O optimization for faster processing.&lt;br>
&lt;strong>Incremental HNSW Indexing:&lt;/strong> Build indexes gradually as data arrives.&lt;br>
&lt;strong>Batch Search:&lt;/strong> Optimized parallel processing for batch queries.&lt;br>
&lt;strong>Memory Optimization:&lt;/strong> Reduced usage for large datasets with improved ID tracking.&lt;/p>
&lt;h2 id="score-boosting-reranker">Score-Boosting Reranker&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.14.x/reranking.jpg" alt="reranking">&lt;/p>
&lt;p>When integrating vector search into specific applications, you can now tweak the final result list using domain or business logic. For example, if you are building a &lt;strong>chatbot or search on website content&lt;/strong>, you can rank results with &lt;code>title&lt;/code> metadata higher than &lt;code>body_text&lt;/code> in your results.&lt;/p></description></item><item><title>Pathwork Optimizes Life Insurance Underwriting with Precision Vector Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pathwork/</link><pubDate>Tue, 22 Apr 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pathwork/</guid><description>&lt;h2 id="pathwork-optimizes-life-insurance-underwriting-with-precision-vector-search">&lt;strong>Pathwork Optimizes Life Insurance Underwriting with Precision Vector Search&lt;/strong>&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pathwork/case-study-pathwork-summary-dark-b.jpg" alt="Pathwork Optimizes Life Insurance Underwriting with Precision Vector Search">&lt;/p>
&lt;h3 id="about-pathwork">&lt;strong>About Pathwork&lt;/strong>&lt;/h3>
&lt;p>Pathwork is redesigning life and health insurance workflows for the age of AI. Brokerages and insurance carriers utilize Pathwork&amp;rsquo;s advanced agentic system to automate their underwriting processes and enhance back-office sales operations. Pathwork&amp;rsquo;s solution drastically reduces errors, completes tasks up to 70 times faster, and significantly conserves human capital.&lt;/p>
&lt;h3 id="the-challenge-accuracy-above-all">&lt;strong>The Challenge: Accuracy Above All&lt;/strong>&lt;/h3>
&lt;p>Life insurance underwriting demands exceptional accuracy. Traditionally, underwriting involves extensive manual input, subjective judgment, and frequent errors. These errors, such as misclassifying risk based on incomplete or misunderstood health data, often result in lost sales and customer dissatisfaction due to sudden premium changes.&lt;/p></description></item><item><title>How Lyzr Supercharged AI Agent Performance with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-lyzr/</link><pubDate>Tue, 15 Apr 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-lyzr/</guid><description>&lt;h1 id="how-lyzr-supercharged-ai-agent-performance-with-qdrant">How Lyzr Supercharged AI Agent Performance with Qdrant&lt;/h1>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-lyzr/case-study-lyzr-summary-dark.png" alt="How Lyzr Supercharged AI Agent Performance with Qdrant">&lt;/p>
&lt;h2 id="scaling-intelligent-agents-how-lyzr-supercharged-performance-with-qdrant">Scaling Intelligent Agents: How Lyzr Supercharged Performance with Qdrant&lt;/h2>
&lt;p>As AI agents become more capable and pervasive, the infrastructure behind them must evolve to handle rising concurrency, low-latency demands, and ever-growing knowledge bases. At Lyzr Agent Studio—where over 100 agents are deployed across industries—these challenges arrived quickly and at scale.&lt;/p>
&lt;p>When their existing vector database infrastructure began to buckle under pressure, the engineering team needed a solution that could do more than just keep up. It had to accelerate them forward.&lt;/p></description></item><item><title>How Mixpeek Uses Qdrant for Efficient Multimodal Feature Stores</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-mixpeek/</link><pubDate>Tue, 08 Apr 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-mixpeek/</guid><description>&lt;h1 id="how-mixpeek-uses-qdrant-for-efficient-multimodal-feature-stores">How Mixpeek Uses Qdrant for Efficient Multimodal Feature Stores&lt;/h1>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-mixpeek/Case-Study-Mixpeek-Summary-Dark.jpg" alt="How Mixpeek Uses Qdrant for Efficient Multimodal Feature Stores">&lt;/p>
&lt;h2 id="about-mixpeek">About Mixpeek&lt;/h2>
&lt;p>&lt;a href="http://mixpeek.com" target="_blank" rel="noopener nofollow">Mixpeek&lt;/a> is a multimodal data processing and retrieval platform designed for developers and data teams. Founded by Ethan Steininger, a former MongoDB search specialist, Mixpeek enables efficient ingestion, feature extraction, and retrieval across diverse media types including video, images, audio, and text.&lt;/p>
&lt;h2 id="the-challenge-optimizing-feature-stores-for-complex-retrievers">The Challenge: Optimizing Feature Stores for Complex Retrievers&lt;/h2>
&lt;p>As Mixpeek&amp;rsquo;s multimodal data warehouse evolved, their feature stores needed to support increasingly complex retrieval patterns. Initially using MongoDB Atlas&amp;rsquo;s vector search, they encountered limitations when implementing &lt;a href="https://docs.mixpeek.com/retrieval/retrievers" target="_blank" rel="noopener nofollow">&lt;strong>hybrid retrievers&lt;/strong>&lt;/a> &lt;strong>combining dense and sparse vectors with metadata pre-filtering&lt;/strong>.&lt;/p></description></item><item><title>Satellite Vector Broadcasting: Near-Zero Latency Retrieval from Space</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/satellite-vector-broadcasting/</link><pubDate>Tue, 01 Apr 2025 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/satellite-vector-broadcasting/</guid><description>&lt;h2 id="-qdrant-launches-satellite-vector-broadcasting-for-near-zero-latency-retrieval">📡 Qdrant Launches Satellite Vector Broadcasting for Near-Zero Latency Retrieval&lt;/h2>
&lt;p>&lt;strong>CAPE CANAVERAL, FL&lt;/strong> — Qdrant today announced the successful deployment of &lt;strong>Satellite Vector Broadcasting&lt;/strong>, an ambitious new system for high-speed vector search that uses &lt;strong>actual satellites&lt;/strong> to transmit, shard, and retrieve embeddings — bypassing Earth entirely.&lt;/p>
&lt;blockquote>
&lt;p>“Cloud is old news. Space is the new infrastructure,” said orbital software lead Luna Hertz. “We&amp;rsquo;re proud to say we&amp;rsquo;ve finally untethered cosine similarity from the bonds of gravity and Wi-Fi.”&lt;/p></description></item><item><title>HubSpot &amp; Qdrant: Scaling an Intelligent AI Assistant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-hubspot/</link><pubDate>Mon, 24 Mar 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-hubspot/</guid><description>&lt;p>HubSpot, a global leader in CRM solutions, continuously enhances its product suite with powerful AI-driven features. To optimize Breeze AI, its flagship intelligent assistant, HubSpot chose Qdrant as its vector database.&lt;/p>
&lt;h2 id="challenges-scaling-an-intelligent-ai">&lt;strong>Challenges Scaling an Intelligent AI&lt;/strong>&lt;/h2>
&lt;p>As HubSpot expanded its AI capabilities, it faced several critical challenges in scaling Breeze AI to meet growing user demands:&lt;/p>
&lt;ul>
&lt;li>Delivering highly personalized, context-aware responses required a robust vector search solution that could retrieve data quickly while maintaining accuracy.&lt;/li>
&lt;li>With increasing user interactions, HubSpot needed a scalable system capable of handling rapid data growth without performance degradation.&lt;/li>
&lt;li>Integration with HubSpot’s existing AI infrastructure had to be swift and easy to support fast-paced development cycles.&lt;/li>
&lt;li>HubSpot sought a future-proof vector search solution that could adapt to emerging AI advancements while maintaining high availability.&lt;/li>
&lt;/ul>
&lt;p>These challenges made it essential to find a high-performance, developer-friendly vector database that could power Breeze AI efficiently.&lt;/p></description></item><item><title>Vibe Coding RAG with our MCP server</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/webinar-vibe-coding-rag/</link><pubDate>Fri, 21 Mar 2025 12:02:00 +0100</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/webinar-vibe-coding-rag/</guid><description>&lt;p>Another month means another webinar! This time &lt;a href="https://www.linkedin.com/in/kacperlukawski/" target="_blank" rel="noopener nofollow">Kacper Łukawski&lt;/a> put some of the popular AI coding agents to the
test. There is a lot of excitement around tools such as Cursor, GitHub Copilot, Aider and Claude Code, so we wanted to
see how they perform in implementing something more complex than a simple frontend application. Wouldn&amp;rsquo;t it be awesome
if LLMs could code Retrieval Augmented Generation on their own?&lt;/p>
&lt;h2 id="vibe-coding">Vibe coding&lt;/h2>
&lt;p>&lt;strong>Vibe coding&lt;/strong> is a development approach introduced by Andrej Karpathy where developers surrender to intuition rather
than control. It leverages AI coding assistants for implementation while developers focus on outcomes. Through voice
interfaces and complete trust in AI suggestions, the process prioritizes results over code comprehension.&lt;/p></description></item><item><title>How Deutsche Telekom Built a Multi-Agent Enterprise Platform Leveraging Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-deutsche-telekom/</link><pubDate>Fri, 07 Mar 2025 08:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-deutsche-telekom/</guid><description>&lt;p>&lt;strong>How Deutsche Telekom Built a Scalable, Multi-Agent Enterprise Platform Leveraging Qdrant—Powering Over 2 Million Conversations Across Europe&lt;/strong>&lt;/p>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-deutsche-telekom/dtag-team.jpg" alt="Deutsche Telekom&amp;rsquo;s AI Competence Center team leading the LMOS platform development">&lt;/p>
&lt;p>&lt;a href="https://www.linkedin.com/in/arun-joseph-ab47102a/" target="_blank" rel="noopener nofollow">Arun Joseph&lt;/a>, who leads engineering and architecture for &lt;a href="https://www.telekom.com/en/company/digital-responsibility/details/artificial-intelligence-at-deutsche-telekom-1055154" target="_blank" rel="noopener nofollow">Deutsche Telekom&amp;rsquo;s AI Competence Center (AICC)&lt;/a>, faced a critical challenge: how do you efficiently and scalably deploy AI-powered assistants across a vast enterprise ecosystem? The goal was to deploy GenAI for customer sales and service operations to resolve customer queries faster across the 10 countries where Deutsche Telekom operates in Europe.&lt;/p></description></item><item><title>Introducing Qdrant Cloud’s New Enterprise-Ready Vector Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/enterprise-vector-search/</link><pubDate>Tue, 04 Mar 2025 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/enterprise-vector-search/</guid><description>&lt;p>At Qdrant, we enable developers to power AI workloads - not only securely, but at any scale. That’s why we are excited to introduce Qdrant Cloud’s new suite of enterprise-grade features. With &lt;strong>our Cloud API, Cloud RBAC&lt;/strong>, &lt;strong>Single Sign-On (SSO)&lt;/strong>, granular &lt;strong>Database API Keys&lt;/strong>, and &lt;strong>Advanced Monitoring &amp;amp; Observability&lt;/strong>, you now have the control and visibility needed to operate at scale.&lt;/p>
&lt;h2 id="securely-scale-your-ai-workloads">Securely Scale Your AI Workloads&lt;/h2>
&lt;p>Your enterprise-grade AI applications demand more than just a powerful vector database—they need to meet compliance, performance, and scalability requirements. To do that, you need simplified management, secure access &amp;amp; authentication, and real-time monitoring &amp;amp; observability. Now, Qdrant’s new enterprise-grade features address these needs, giving your team the tools to reduce operational overhead, simplify authentication, enforce access policies, and have deep visibility into performance.&lt;/p></description></item><item><title>Metadata automation and optimization - Reece Griffiths | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/metadata-deasy-labs/</link><pubDate>Mon, 24 Feb 2025 18:29:51 -0300</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/metadata-deasy-labs/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;Metadata is one of the key unlocks to both segmentation and file organization, setting up the right knowledge base, and enriching it to hit that last mile of accuracy and speed.”&lt;/em>&lt;br>
&lt;strong>— Reece Griffiths&lt;/strong>&lt;/p>
&lt;/blockquote>
&lt;p>&lt;a href="https://www.linkedin.com/in/reece-william-griffiths/" target="_blank" rel="noopener nofollow">Reece Griffiths&lt;/a> is the CEO and co-founder of &lt;a href="https://www.deasylabs.com/" target="_blank" rel="noopener nofollow">Deasy Labs&lt;/a>, a metadata automation platform that helps companies optimize their vector databases for retrieval accuracy. Previously part of Y Combinator, Deasy Labs focuses on improving metadata extraction, classification, and enrichment at scale.&lt;/p></description></item><item><title>How to Build Intelligent Agentic RAG with CrewAI and Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/webinar-crewai-qdrant-obsidian/</link><pubDate>Fri, 24 Jan 2025 09:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/webinar-crewai-qdrant-obsidian/</guid><description>&lt;p>In a recent live session, we teamed up with &lt;a href="https://crewai.com/" target="_blank" rel="noopener nofollow">CrewAI&lt;/a>, a framework for building intelligent,
multi-agent applications. If you missed it, &lt;a href="https://www.linkedin.com/in/kacperlukawski/" target="_blank" rel="noopener nofollow">Kacper Łukawski&lt;/a> from Qdrant
and &lt;a href="https://www.linkedin.com/in/tonykipkemboi" target="_blank" rel="noopener nofollow">Tony Kipkemboi&lt;/a> from &lt;a href="https://crewai.com/" target="_blank" rel="noopener nofollow">CrewAI&lt;/a> gave an insightful
overview of CrewAI’s capabilities and demonstrated how to leverage Qdrant for creating an agentic RAG
(Retrieval-Augmented Generation) system. The focus was on semi-automating email communication, using
&lt;a href="https://obsidian.md/" target="_blank" rel="noopener nofollow">Obsidian&lt;/a> as the knowledge base.&lt;/p>
&lt;p>In this article, we’ll guide you through the process of setting up an AI-powered system that connects directly to your
email inbox and knowledge base, enabling it to analyze incoming messages and existing content to generate contextually
relevant response suggestions.&lt;/p></description></item><item><title>Qdrant 1.13 - GPU Indexing, Strict Mode &amp; New Storage Engine</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.13.x/</link><pubDate>Thu, 23 Jan 2025 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.13.x/</guid><description>&lt;p>&lt;a href="https://github.com/qdrant/qdrant/releases/tag/v1.13.0" target="_blank" rel="noopener nofollow">&lt;strong>Qdrant 1.13.0 is out!&lt;/strong>&lt;/a> Let&amp;rsquo;s look at the main features for this version:&lt;/p>
&lt;p>&lt;strong>GPU Accelerated Indexing:&lt;/strong> Fast HNSW indexing with vendor-agnostic GPU support.&lt;br/>
&lt;strong>Strict Mode:&lt;/strong> Enforce operation restrictions on collections for enhanced control.&lt;/p>
&lt;p>&lt;strong>HNSW Graph Compression:&lt;/strong> Reduce storage use via HNSW Delta Encoding.&lt;br/>
&lt;strong>Named Vector Filtering:&lt;/strong> New &lt;code>has_vector&lt;/code> filtering condition for named vectors.&lt;br/>
&lt;strong>Custom Storage:&lt;/strong> For constant-time reads/writes of payloads and sparse vectors.&lt;/p>
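The new `has_vector` condition slots into Qdrant's usual JSON filter structure. A hedged sketch of what such a filter body could look like as a plain dict; consult the filtering documentation for the exact schema:

```python
# Sketch of a filter matching only points that have the named vector
# "image" stored. The JSON shape follows Qdrant's filter conventions,
# but verify against the 1.13 filtering docs before relying on it.
import json

scroll_filter = {
    "must": [
        {"has_vector": "image"},
    ]
}

# This dict would be sent as part of a search or scroll request body.
body = json.dumps({"filter": scroll_filter, "limit": 10})
```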
&lt;h2 id="gpu-accelerated-indexing">GPU Accelerated Indexing&lt;/h2>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.13.x/image_6.png" alt="gpu-accelerated-indexing">&lt;/p>
&lt;p>We are making it easier for you to handle even &lt;strong>the most demanding workloads&lt;/strong>.&lt;/p></description></item><item><title>Voiceflow &amp; Qdrant: Powering No-Code AI Agent Creation with Scalable Vector Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-voiceflow/</link><pubDate>Tue, 10 Dec 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-voiceflow/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-voiceflow/image1.png" alt="voiceflow/image2.png">&lt;/p>
&lt;p>&lt;a href="https://www.voiceflow.com/" target="_blank" rel="noopener nofollow">Voiceflow&lt;/a> enables enterprises to create AI agents in a no-code environment by designing workflows through a drag-and-drop interface. The platform allows developers to host and customize chatbot interfaces without needing to build their own RAG pipeline, working out of the box and being easily adaptable to specific use cases. “Powered by technologies like Natural Language Understanding (NLU), Large Language Models (LLM), and Qdrant as a vector search engine, Voiceflow serves a diverse range of customers, including enterprises that develop chatbots for internal and external AI use cases,” says &lt;a href="https://www.linkedin.com/in/xavierportillaedo/" target="_blank" rel="noopener nofollow">Xavier Portillo Edo&lt;/a>, Head of Cloud Infrastructure at Voiceflow.&lt;/p></description></item><item><title>Building a Facial Recognition System with Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/facial-recognition/</link><pubDate>Tue, 03 Dec 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/facial-recognition/</guid><description>&lt;h1 id="the-twin-celebrity-app">The Twin Celebrity App&lt;/h1>
&lt;p>In the era of personalization, combining cutting-edge technology with fun can create engaging applications that resonate with users. One such project is the &lt;a href="https://github.com/neural-maze/vector-twin" target="_blank" rel="noopener nofollow">&lt;strong>Twin Celebrity app&lt;/strong>&lt;/a>, a tool that matches users with their celebrity look-alikes using facial recognition embeddings and &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/advanced-search/">&lt;strong>vector search&lt;/strong>&lt;/a> powered by Qdrant. This blog post dives into the architecture, tools, and practical advice for developers who want to build this app—or something similar.&lt;/p>
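At its core, the matching step is nearest-neighbor search over face embeddings: embed the selfie, then return the celebrity whose stored embedding is most similar. A toy cosine-similarity sketch (names and 3-dimensional vectors are made up for illustration; real face embeddings have hundreds of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "face embeddings" keyed by celebrity id.
celebrities = {
    "celeb_a": [0.9, 0.1, 0.0],
    "celeb_b": [0.1, 0.9, 0.2],
}
selfie = [0.8, 0.2, 0.1]

# Pick the celebrity whose embedding is most similar to the selfie's.
best_match = max(celebrities, key=lambda name: cosine(selfie, celebrities[name]))
```

A vector database replaces the brute-force `max` with an approximate index, which is what makes this fast at the scale of thousands of celebrity embeddings.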
&lt;p>The &lt;a href="https://github.com/neural-maze/vector-twin" target="_blank" rel="noopener nofollow">&lt;strong>Twin Celebrity app&lt;/strong>&lt;/a> identifies which celebrity a user resembles by analyzing a selfie. The app utilizes:&lt;/p></description></item><item><title>Optimizing ColPali for Retrieval at Scale, 13x Faster Results</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/colpali-qdrant-optimization/</link><pubDate>Wed, 27 Nov 2024 00:40:24 -0300</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/colpali-qdrant-optimization/</guid><description>&lt;p>ColPali is a fascinating leap in document retrieval. Its precision in handling visually rich PDFs is phenomenal, but scaling it to handle real-world datasets comes with its share of computational challenges.&lt;/p>
&lt;p>Here&amp;rsquo;s how we solved these challenges to make ColPali 13x faster without sacrificing the precision it’s known for.&lt;/p>
&lt;h2 id="the-scaling-dilemma">The Scaling Dilemma&lt;/h2>
&lt;p>ColPali generates &lt;strong>1,030 vectors for just one page of a PDF.&lt;/strong> While this is manageable for small-scale tasks, in a real-world production setting where you may need to store hundreds of thousands of PDFs, the challenge of scaling becomes significant.&lt;/p></description></item><item><title>Best Practices in RAG Evaluation: A Comprehensive Guide</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/rag-evaluation-guide/</link><pubDate>Sun, 24 Nov 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/rag-evaluation-guide/</guid><description>&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>This guide will teach you how to evaluate a RAG system for both &lt;strong>accuracy&lt;/strong> and &lt;strong>quality&lt;/strong>. You will learn to maintain RAG performance by testing for search precision, recall, contextual relevance, and response accuracy.&lt;/p>
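Search precision and recall are the standard starting metrics for the retrieval stage. A minimal sketch of precision@k and recall@k against a set of human-judged relevant documents (the document ids here are purely illustrative):

```python
# Sketch: precision@k and recall@k for a retrieval step, given the set
# of documents a human judged relevant. Data is illustrative.

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant documents found in the top-k results."""
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)

retrieved = ["d1", "d7", "d3", "d9"]   # ranked results from the retriever
relevant = {"d1", "d3", "d5"}          # ground-truth relevant documents

p = precision_at_k(retrieved, relevant, 4)   # 2 of 4 retrieved are relevant
r = recall_at_k(retrieved, relevant, 4)      # 2 of 3 relevant were found
```

Tracking these numbers over time is what lets you tell whether a change to chunking, embeddings, or filters actually improved retrieval.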
&lt;p>&lt;strong>Building a RAG application is just the beginning;&lt;/strong> it is crucial to test its usefulness for the end-user and calibrate its components for long-term stability.&lt;/p>
&lt;p>RAG systems can encounter errors at any of the three crucial stages: retrieving relevant information, augmenting that information, and generating the final response. By systematically assessing and fine-tuning each component, you will be able to maintain a reliable and contextually relevant GenAI application that meets user needs.&lt;/p></description></item><item><title>Empowering QA.tech’s Testing Agents with Real-Time Precision and Scale</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-qatech/</link><pubDate>Thu, 21 Nov 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-qatech/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-qatech/qdrant-qatech-1.png" alt="qdrant-qatech-1">&lt;/p>
&lt;p>&lt;a href="https://qa.tech/" target="_blank" rel="noopener nofollow">QA.tech&lt;/a>, a company specializing in AI-driven automated testing solutions, found that building and &lt;strong>fully testing web applications, especially end-to-end, can be complex and time-consuming&lt;/strong>. Unlike unit tests, end-to-end tests reveal what’s actually happening in the browser, often uncovering issues that other methods miss.&lt;/p>
&lt;p>Traditional solutions like hard-coded tests are not only labor-intensive to set up but also challenging to maintain over time. Alternatively, hiring QA testers can be a solution, but for startups, it quickly becomes a bottleneck. With every release, more testers are needed, and if testing is outsourced, managing timelines and ensuring quality becomes even harder.&lt;/p></description></item><item><title>Advanced Retrieval with ColPali &amp; Qdrant Vector Database</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-colpali/</link><pubDate>Tue, 05 Nov 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-colpali/</guid><description>&lt;table>
 &lt;thead>
 &lt;tr>
 &lt;th>Time: 30 min&lt;/th>
 &lt;th>Level: Advanced&lt;/th>
 &lt;th>Notebook: &lt;a href="https://github.com/qdrant/examples/blob/master/colpali-and-binary-quantization/colpali_demo_binary.ipynb" target="_blank" rel="noopener nofollow">GitHub&lt;/a>&lt;/th>
 &lt;/tr>
 &lt;/thead>
 &lt;tbody>
 &lt;/tbody>
&lt;/table>
&lt;p>It’s no secret that even the most modern document retrieval systems have a hard time handling visually rich documents like &lt;strong>PDFs, containing tables, images, and complex layouts.&lt;/strong>&lt;/p>
&lt;p>ColPali introduces a multimodal retrieval approach that uses &lt;strong>Vision Language Models (VLMs)&lt;/strong> instead of the traditional OCR and text-based extraction.&lt;/p>
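The multi-vector comparison behind this approach is typically MaxSim-style late interaction: for every query vector, take the best dot product over all document vectors, then sum those maxima. A toy sketch with 2-dimensional vectors (real ColPali embeddings are far larger):

```python
# Sketch of ColBERT/ColPali-style late interaction ("MaxSim"):
# each query vector is matched against its best document vector,
# and the per-query maxima are summed into one relevance score.

def maxsim(query_vecs, doc_vecs):
    total = 0.0
    for q in query_vecs:
        best = max(sum(qi * di for qi, di in zip(q, d)) for d in doc_vecs)
        total += best
    return total

query = [[1.0, 0.0], [0.0, 1.0]]                 # two query token vectors
doc = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]       # three document patch vectors

score = maxsim(query, doc)  # best matches: 0.9 for q1, 0.8 for q2
```

Because every query vector is compared with every document vector, the cost grows with the product of the two counts, which is exactly why a page producing over a thousand vectors becomes expensive at scale.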
&lt;p>By processing document images directly, it creates &lt;strong>multi-vector embeddings&lt;/strong> from both the visual and textual content, capturing the document&amp;rsquo;s structure and context more effectively. This method outperforms traditional techniques, as demonstrated by the &lt;a href="https://huggingface.co/vidore" target="_blank" rel="noopener nofollow">&lt;strong>Visual Document Retrieval Benchmark (ViDoRe)&lt;/strong>&lt;/a>.&lt;/p></description></item><item><title>How Sprinklr Leverages Qdrant to Enhance AI-Driven Customer Experience Solutions</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-sprinklr/</link><pubDate>Thu, 17 Oct 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-sprinklr/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-sprinklr/image1.png" alt="case-study-sprinklr-1">&lt;/p>
&lt;p>&lt;a href="https://www.sprinklr.com/" target="_blank" rel="noopener nofollow">Sprinklr&lt;/a>, a leader in unified customer experience management (Unified-CXM), helps global brands engage customers meaningfully across more than 30 digital channels. To achieve this, Sprinklr needed a scalable solution for AI-powered search to support their AI applications, particularly in handling the vast data requirements of customer interactions.&lt;/p>
&lt;p>Raghav Sonavane, Associate Director of Machine Learning Engineering at Sprinklr, leads the Applied AI team, focusing on Generative AI (GenAI) and Retrieval-Augmented Generation (RAG). His team is responsible for training and fine-tuning in-house models and deploying advanced retrieval and generation systems for customer-facing applications like FAQ bots and other &lt;a href="https://www.sprinklr.com/blog/how-sprinklr-uses-RAG/" target="_blank" rel="noopener nofollow">GenAI-driven services&lt;/a>. The team provides all of these capabilities in a centralized platform to the Sprinklr product engineering teams.&lt;/p></description></item><item><title>Qdrant 1.12 - Distance Matrix, Facet Counting &amp; On-Disk Indexing</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.12.x/</link><pubDate>Tue, 08 Oct 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.12.x/</guid><description>&lt;p>&lt;a href="https://github.com/qdrant/qdrant/releases/tag/v1.12.0" target="_blank" rel="noopener nofollow">&lt;strong>Qdrant 1.12.0 is out!&lt;/strong>&lt;/a> Let&amp;rsquo;s look at major new features and a few minor additions:&lt;/p>
&lt;p>&lt;strong>Distance Matrix API:&lt;/strong> Efficiently calculate pairwise distances between vectors.&lt;br/>
&lt;strong>GUI Data Exploration:&lt;/strong> Visually navigate your dataset and analyze vector relationships.&lt;br/>
&lt;strong>Faceting API:&lt;/strong> Dynamically aggregate and count unique values in specific fields.&lt;/p>
&lt;p>&lt;strong>Text Index on disk:&lt;/strong> Reduce memory usage by storing text indexing data on disk.&lt;br/>
&lt;strong>Geo Index on disk:&lt;/strong> Offload indexed geographic data on disk for memory efficiency.&lt;/p></description></item><item><title>New DeepLearning.AI Course on Retrieval Optimization: From Tokenization to Vector Quantization</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-deeplearning-ai-course/</link><pubDate>Sun, 06 Oct 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-deeplearning-ai-course/</guid><description>&lt;p>We’re excited to announce a new course on DeepLearning.AI&amp;rsquo;s platform: &lt;a href="https://www.deeplearning.ai/short-courses/retrieval-optimization-from-tokenization-to-vector-quantization/?utm_campaign=qdrant-launch&amp;amp;utm_medium=qdrant&amp;amp;utm_source=partner-promo" target="_blank" rel="noopener nofollow">Retrieval Optimization: From Tokenization to Vector Quantization&lt;/a>. This collaboration between Qdrant and DeepLearning.AI aims to empower developers and data enthusiasts with the skills needed to enhance &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/advanced-search/">vector search&lt;/a> capabilities in their applications.&lt;/p>
&lt;p>Led by Qdrant’s Kacper Łukawski, this free, one-hour course is designed for beginners eager to delve into the world of retrieval optimization.&lt;/p>
&lt;h2 id="why-this-collaboration-matters">Why This Collaboration Matters&lt;/h2>
&lt;p>At Qdrant, we believe in the power of effective search to transform user experiences. Partnering with DeepLearning.AI allows us to combine our cutting-edge vector search technology with their educational expertise, providing learners with a comprehensive understanding of how to build and optimize &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/rag/rag-evaluation-guide/">Retrieval-Augmented Generation (RAG)&lt;/a> applications. This course is part of our commitment to equip the community with practical skills that leverage advanced machine learning techniques.&lt;/p></description></item><item><title>Introducing Qdrant for Startups</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-for-startups-launch/</link><pubDate>Wed, 02 Oct 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-for-startups-launch/</guid><description>&lt;h1 id="supporting-early-stage-startups">Supporting Early-Stage Startups&lt;/h1>
&lt;p>Over the past few years, we’ve witnessed some of the most innovative AI applications being built on Qdrant. A significant number of these have come from startups pushing the boundaries of what’s possible in AI. To ensure these pioneering teams have access to the right resources at the right time, we&amp;rsquo;re introducing &lt;strong>Qdrant for Startups&lt;/strong>. This initiative is designed to provide startups with the technical support, guidance, and infrastructure they need to scale their AI innovations quickly and effectively.&lt;/p></description></item><item><title>Qdrant and Shakudo: Secure &amp; Performant Vector Search in VPC Environments</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-shakudo/</link><pubDate>Mon, 23 Sep 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-shakudo/</guid><description>&lt;p>We are excited to announce that Qdrant has partnered with &lt;a href="https://www.shakudo.io/" target="_blank" rel="noopener nofollow">Shakudo&lt;/a>, bringing &lt;a href="https://qdrant.tech/hybrid-cloud/" target="_blank" rel="noopener nofollow">Qdrant Hybrid Cloud&lt;/a> to Shakudo’s virtual private cloud (VPC) deployments. This collaboration allows Shakudo clients to seamlessly integrate Qdrant’s high-performance vector database as a managed service into their private infrastructure, ensuring data sovereignty, scalability, and low-latency vector search for enterprise AI applications.&lt;/p>
&lt;h2 id="data-sovereignty-and-compliance-with-secure-vector-search">Data Sovereignty and Compliance with Secure Vector Search&lt;/h2>
&lt;p>Shakudo’s VPC deployments ensure that client data remains within their infrastructure, providing strict control over sensitive information while leveraging a fully managed AI toolset. Qdrant Hybrid Cloud is tailored for environments where data privacy and regulatory compliance are paramount. It keeps the data plane inside the customer&amp;rsquo;s infrastructure, with only essential telemetry shared externally, guaranteeing database isolation and security, while providing a fully managed service.&lt;/p></description></item><item><title>Data-Driven RAG Evaluation: Testing Qdrant Apps with Relari AI</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-relari/</link><pubDate>Mon, 16 Sep 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-relari/</guid><description>&lt;h1 id="using-performance-metrics-to-evaluate-rag-systems">Using Performance Metrics to Evaluate RAG Systems&lt;/h1>
&lt;p>Evaluating the performance of a &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/rag/">Retrieval-Augmented Generation (RAG)&lt;/a> application can be a complex task for developers.&lt;/p>
&lt;p>To help simplify this, Qdrant has partnered with &lt;a href="https://www.relari.ai" target="_blank" rel="noopener nofollow">Relari&lt;/a> to provide an in-depth &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/articles/rapid-rag-optimization-with-qdrant-and-quotient/">RAG evaluation&lt;/a> process.&lt;/p>
&lt;p>As a &lt;a href="https://qdrant.tech" target="_blank" rel="noopener nofollow">vector database&lt;/a>, Qdrant handles the data storage and retrieval, while Relari enables you to run experiments to assess how well your RAG app performs in real-world scenarios. Together, they allow for fast, iterative testing and evaluation, making it easier to keep up with your app&amp;rsquo;s development pace.&lt;/p></description></item><item><title>Nyris &amp; Qdrant: How Vectors are the Future of Visual Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-nyris/</link><pubDate>Tue, 10 Sep 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-nyris/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-nyris/nyris-case-study.png" alt="nyris-case-study">&lt;/p>
&lt;h2 id="about-nyris">About Nyris&lt;/h2>
&lt;p>Founded in 2015 by CTO Markus Lukasson and his sister Anna Lukasson-Herzig, &lt;a href="https://www.nyris.io/" target="_blank" rel="noopener nofollow">Nyris&lt;/a> offers advanced visual search solutions for companies, positioning itself as the &amp;ldquo;Google Lens&amp;rdquo; for corporate data. Their technology powers use cases such as visual search on websites of large retailers and machine manufacturing companies that require visual identification of spare parts. The primary goal is to identify items in a product catalog or spare parts as quickly as possible. With a strong foundation in e-commerce and nearly a decade of experience in vector search, Nyris is at the forefront of visual search innovation.&lt;/p></description></item><item><title>Kern AI &amp; Qdrant: Precision AI Solutions for Finance and Insurance</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-kern/</link><pubDate>Wed, 28 Aug 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-kern/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-kern/kern-case-study.png" alt="kern-case-study">&lt;/p>
&lt;h2 id="about-kern-ai">About Kern AI&lt;/h2>
&lt;p>&lt;a href="https://kern.ai/" target="_blank" rel="noopener nofollow">Kern AI&lt;/a> specializes in data-centric AI. Originally an AI consulting firm, the team led by Co-Founder and CEO Johannes Hötter quickly realized that developers spend 80% of their time reviewing data instead of focusing on model development. This inefficiency significantly reduces the speed of development and adoption of AI. To tackle this challenge, Kern AI developed a low-code platform that enables developers to quickly analyze their datasets and identify outliers using vector search. This innovation led to enhanced data accuracy and streamlined workflows for the rapid deployment of AI applications.&lt;/p></description></item><item><title>Qdrant 1.11 - The Vector Stronghold: Optimizing Data Structures for Scale and Efficiency</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.11.x/</link><pubDate>Mon, 12 Aug 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.11.x/</guid><description>&lt;p>&lt;a href="https://github.com/qdrant/qdrant/releases/tag/v1.11.0" target="_blank" rel="noopener nofollow">Qdrant 1.11.0 is out!&lt;/a> This release largely focuses on features that improve memory usage and optimize segments. However, there are a few cool minor features, so let&amp;rsquo;s look at the whole list:&lt;/p>
&lt;p>Optimized Data Structures:&lt;br>
&lt;strong>Defragmentation:&lt;/strong> Storage for multitenant workloads is more optimized and scales better.&lt;br>
&lt;strong>On-Disk Payload Index:&lt;/strong> Store less frequently used data on disk, rather than in RAM.&lt;br>
&lt;strong>UUID for Payload Index:&lt;/strong> Additional data types for payload can result in big memory savings.&lt;/p></description></item><item><title>Kairoswealth &amp; Qdrant: Transforming Wealth Management with AI-Driven Insights and Scalable Vector Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-kairoswealth/</link><pubDate>Wed, 10 Jul 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-kairoswealth/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-kairoswealth/image2.png" alt="Kairoswealth overview">&lt;/p>
&lt;h2 id="about-kairoswealth">&lt;strong>About Kairoswealth&lt;/strong>&lt;/h2>
&lt;p>&lt;a href="https://kairoswealth.com/" target="_blank" rel="noopener nofollow">Kairoswealth&lt;/a> is a comprehensive wealth management platform designed to provide users with a holistic view of their financial portfolio. The platform offers access to unique financial products and automates back-office operations through its AI assistant, Gaia.&lt;/p>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-kairoswealth/image3.png" alt="Dashboard Kairoswealth">&lt;/p>
&lt;h2 id="motivations-for-adopting-a-vector-database">&lt;strong>Motivations for Adopting a Vector Database&lt;/strong>&lt;/h2>
&lt;p>“At Kairoswealth we encountered several use cases necessitating the ability to run similarity queries on large datasets. Key applications included product recommendations and retrieval-augmented generation (RAG),” says &lt;a href="https://www.linkedin.com/in/vincent-teyssier/" target="_blank" rel="noopener nofollow">Vincent Teyssier&lt;/a>, Chief Technology &amp;amp; AI Officer at Kairoswealth. These needs drove the search for a more robust and scalable vector database solution.&lt;/p></description></item><item><title>Qdrant 1.10 - Universal Query, Built-in IDF &amp; ColBERT Support</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.10.x/</link><pubDate>Mon, 01 Jul 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.10.x/</guid><description>&lt;p>&lt;a href="https://github.com/qdrant/qdrant/releases/tag/v1.10.0" target="_blank" rel="noopener nofollow">Qdrant 1.10.0 is out!&lt;/a> This version introduces some major changes, so let&amp;rsquo;s dive right in:&lt;/p>
&lt;p>&lt;strong>Universal Query API:&lt;/strong> All search APIs, including Hybrid Search, are now in one Query endpoint.&lt;br>
&lt;strong>Built-in IDF:&lt;/strong> We added the IDF mechanism to Qdrant&amp;rsquo;s core search and indexing processes.&lt;br>
&lt;strong>Multivector Support:&lt;/strong> Native support for late-interaction models such as ColBERT is accessible via the Query API.&lt;/p></description></item><item><title>One Endpoint for All Queries</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.10.x/</link><pubDate>Mon, 01 Jul 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.10.x/</guid><description>&lt;h2 id="one-endpoint-for-all-queries">One Endpoint for All Queries&lt;/h2>
&lt;h2 id="one-endpoint-for-all-queries">One Endpoint for All Queries&lt;/h2>
&lt;p>&lt;strong>Query API&lt;/strong> will consolidate all search APIs into a single request. Previously, you had to work outside of the API to combine different search requests. Now these approaches are reduced to parameters of a single request, so you can avoid merging individual results.&lt;/p></description></item><item><title>Community Highlights #1</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/community-highlights-1/</link><pubDate>Thu, 20 Jun 2024 11:57:37 -0300</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/community-highlights-1/</guid><description>&lt;p>Welcome to the very first edition of Community Highlights, where we celebrate the most impactful contributions and achievements of our vector search community! 🎉&lt;/p>
&lt;h2 id="content-highlights-">Content Highlights 🚀&lt;/h2>
&lt;p>Here are some standout projects and articles from our community this past month. If you&amp;rsquo;re looking to learn more about vector search or build some great projects, we recommend checking out these guides:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>&lt;a href="https://towardsdev.com/implementing-advanced-agentic-vector-search-a-comprehensive-guide-to-crewai-and-qdrant-ca214ca4d039" target="_blank" rel="noopener nofollow">Implementing Advanced Agentic Vector Search&lt;/a>: A Comprehensive Guide to CrewAI and Qdrant by &lt;a href="https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/" target="_blank" rel="noopener nofollow">Pavan Kumar&lt;/a>&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Build Your Own RAG Using &lt;a href="https://www.youtube.com/watch?v=m_3q3XnLlTI" target="_blank" rel="noopener nofollow">Unstructured, Llama3 via Groq, Qdrant &amp;amp; LangChain&lt;/a> by &lt;a href="https://www.linkedin.com/in/sudarshan-koirala/" target="_blank" rel="noopener nofollow">Sudarshan Koirala&lt;/a>&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Qdrant filtering and &lt;a href="https://www.youtube.com/watch?v=iaXFggqqGD0" target="_blank" rel="noopener nofollow">self-querying retriever&lt;/a> retrieval with LangChain by &lt;a href="https://www.linkedin.com/in/infoslack/" target="_blank" rel="noopener nofollow">Daniel Romero&lt;/a>&lt;/strong>&lt;/li>
&lt;li>&lt;strong>RAG Evaluation with &lt;a href="https://superlinked.com/vectorhub/articles/retrieval-augmented-generation-eval-qdrant-arize" target="_blank" rel="noopener nofollow">Arize Phoenix&lt;/a> by &lt;a href="https://www.linkedin.com/in/atitaarora/" target="_blank" rel="noopener nofollow">Atita Arora&lt;/a>&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Building a Serverless Application with &lt;a href="https://medium.com/@benitomartin/building-a-serverless-application-with-aws-lambda-and-qdrant-for-semantic-search-ddb7646d4c2f" target="_blank" rel="noopener nofollow">AWS Lambda and Qdrant&lt;/a> for Semantic Search by &lt;a href="https://www.linkedin.com/in/benitomzh/" target="_blank" rel="noopener nofollow">Benito Martin&lt;/a>&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Production ready Secure and &lt;a href="https://towardsdev.com/production-ready-secure-and-powerful-ai-implementations-with-azure-services-671b68631212" target="_blank" rel="noopener nofollow">Powerful AI Implementations with Azure Services&lt;/a> by &lt;a href="https://www.linkedin.com/in/kameshwara-pavan-kumar-mantha-91678b21/" target="_blank" rel="noopener nofollow">Pavan Kumar&lt;/a>&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Building &lt;a href="https://medium.com/@joshmo_dev/building-agentic-rag-with-rust-openai-qdrant-d3a0bb85a267" target="_blank" rel="noopener nofollow">Agentic RAG with Rust, OpenAI &amp;amp; Qdrant&lt;/a> by &lt;a href="https://www.linkedin.com/in/joshua-mo-4146aa220/" target="_blank" rel="noopener nofollow">Joshua Mo&lt;/a>&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Qdrant &lt;a href="https://medium.com/@nickprock/qdrant-hybrid-search-under-the-hood-using-haystack-355841225ac6" target="_blank" rel="noopener nofollow">Hybrid Search&lt;/a> under the hood using Haystack by &lt;a href="https://www.linkedin.com/in/nicolaprocopio/" target="_blank" rel="noopener nofollow">Nicola Procopio&lt;/a>&lt;/strong>&lt;/li>
&lt;li>&lt;strong>&lt;a href="https://medium.com/@datadrifters/llama-3-powered-voice-assistant-integrating-local-rag-with-qdrant-whisper-and-langchain-b4d075b00ac5" target="_blank" rel="noopener nofollow">Llama 3 Powered Voice Assistant&lt;/a>: Integrating Local RAG with Qdrant, Whisper, and LangChain by &lt;a href="https://medium.com/@datadrifters" target="_blank" rel="noopener nofollow">Datadrifters&lt;/a>&lt;/strong>&lt;/li>
&lt;li>&lt;strong>&lt;a href="https://medium.com/@vardhanam.daga/distributed-deployment-of-qdrant-cluster-with-sharding-replicas-e7923d483ebc" target="_blank" rel="noopener nofollow">Distributed deployment&lt;/a> of Qdrant cluster with sharding &amp;amp; replicas by &lt;a href="https://www.linkedin.com/in/vardhanam-daga/overlay/about-this-profile/" target="_blank" rel="noopener nofollow">Vardhanam Daga&lt;/a>&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Private &lt;a href="https://medium.com/aimpact-all-things-ai/building-private-healthcare-ai-assistant-for-clinics-using-qdrant-hybrid-cloud-jwt-rbac-dspy-and-089a772e08ae" target="_blank" rel="noopener nofollow">Healthcare AI Assistant&lt;/a> using Qdrant Hybrid Cloud, DSPy, and Groq by &lt;a href="https://www.linkedin.com/in/sachink1729/" target="_blank" rel="noopener nofollow">Sachin Khandewal&lt;/a>&lt;/strong>&lt;/li>
&lt;/ul>
&lt;h2 id="creator-of-the-month-">Creator of the Month 🌟&lt;/h2>
&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/community-highlights-1/creator-of-the-month-pavan.png" alt="Picture of Pavan Kumar with over 6 content contributions for the Creator of the Month" style="width: 70%;" />
&lt;p>Congratulations to Pavan Kumar for being awarded &lt;strong>Creator of the Month!&lt;/strong> Check out Pavan&amp;rsquo;s most valuable contributions to the Qdrant vector search community this past month:&lt;/p></description></item><item><title>Response to CVE-2024-3829: Arbitrary file upload vulnerability</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/cve-2024-3829-response/</link><pubDate>Mon, 10 Jun 2024 17:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/cve-2024-3829-response/</guid><description>&lt;h3 id="summary">Summary&lt;/h3>
&lt;p>A security vulnerability has been discovered in Qdrant affecting all versions
prior to v1.9, described in &lt;a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-3829" target="_blank" rel="noopener nofollow">CVE-2024-3829&lt;/a>.
The vulnerability allows an attacker to upload arbitrary files to the
filesystem, which can be used to gain remote code execution. This vulnerability is similar to, but distinct from, CVE-2024-2221, which was announced in April 2024.&lt;/p>
&lt;p>The vulnerability does not materially affect Qdrant cloud deployments, as that
filesystem is read-only and authentication is enabled by default. At worst,
the vulnerability could be used by an authenticated user to crash a cluster,
which is already possible, such as by uploading more vectors than can fit in RAM.&lt;/p></description></item><item><title>Qdrant Attains SOC 2 Type II Audit Report</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-soc2-type2-audit/</link><pubDate>Thu, 23 May 2024 20:26:20 -0300</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-soc2-type2-audit/</guid><description>&lt;p>At Qdrant, we are happy to announce the successful completion our the SOC 2 Type II Audit. This achievement underscores our unwavering commitment to upholding the highest standards of security, availability, and confidentiality for our services and our customers’ data.&lt;/p>
&lt;h2 id="soc-2-type-ii-what-is-it">SOC 2 Type II: What Is It?&lt;/h2>
&lt;p>SOC 2 Type II certification is an examination of an organization&amp;rsquo;s controls in reference to the American Institute of Certified Public Accountants &lt;a href="https://www.aicpa-cima.com/resources/download/2017-trust-services-criteria-with-revised-points-of-focus-2022" target="_blank" rel="noopener nofollow">(AICPA) Trust Services criteria&lt;/a>. It evaluates not only our written policies but also their practical implementation, ensuring alignment between our stated objectives and operational practices. Unlike Type I, which is a snapshot in time, Type II verifies over several months that the company has lived up to those controls. The report represents thorough auditing of our security procedures throughout this examination period: January 1, 2024 to April 7, 2024.&lt;/p></description></item><item><title>Introducing Qdrant Stars: Join Our Ambassador Program!</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-stars-announcement/</link><pubDate>Sun, 19 May 2024 11:57:37 -0300</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-stars-announcement/</guid><description>&lt;p>We&amp;rsquo;re excited to introduce &lt;strong>Qdrant Stars&lt;/strong>, our new ambassador program created to recognize and support Qdrant users making a strong impact in the AI and vector search space.&lt;/p>
&lt;p>Whether through innovative content, real-world application tutorials, educational events, or engaging discussions, they are constantly making vector search more accessible and interesting to explore.&lt;/p>
&lt;h3 id="-say-hello-to-the-first-qdrant-stars">👋 Say hello to the first Qdrant Stars!&lt;/h3>
&lt;p>Our inaugural Qdrant Stars are a diverse and talented lineup who have shown exceptional dedication to our community. You might recognize some of their names:&lt;/p></description></item><item><title>Intel’s New CPU Powers Faster Vector Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-cpu-intel-benchmark/</link><pubDate>Fri, 10 May 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-cpu-intel-benchmark/</guid><description>&lt;h4 id="new-generation-silicon-is-a-game-changer-for-aiml-applications">New generation silicon is a game-changer for AI/ML applications&lt;/h4>
&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-cpu-intel-benchmark/qdrant-cpu-intel-benchmark.png" alt="qdrant cpu intel benchmark report">&lt;/p>
&lt;blockquote>
&lt;p>&lt;em>Intel’s 5th gen Xeon processor is made for enterprise-scale operations in vector space.&lt;/em>&lt;/p>
&lt;/blockquote>
&lt;p>Vector search is surging in popularity with institutional customers, and Intel is ready to support the emerging industry. Their latest generation CPU performed exceptionally with Qdrant, a leading vector database used for enterprise AI applications.&lt;/p>
&lt;p>Intel just released the latest Xeon processor (&lt;strong>codename: Emerald Rapids&lt;/strong>) for data centers, a market which is expected to grow to $45 billion. Emerald Rapids offers higher-performance computing and significant energy efficiency over previous generations. Compared to the 4th generation Sapphire Rapids, Emerald boosts AI inference performance by up to 42% and makes vector search 38% faster.&lt;/p></description></item><item><title>QSoC 2024: Announcing Our Interns!</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qsoc24-interns-announcement/</link><pubDate>Wed, 08 May 2024 16:44:22 -0300</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qsoc24-interns-announcement/</guid><description>&lt;p>We are excited to announce the interns selected for the inaugural Qdrant Summer of Code (QSoC) program! After receiving many impressive applications, we have chosen two talented individuals to work on the following projects:&lt;/p>
&lt;p>&lt;strong>&lt;a href="https://www.linkedin.com/in/j16n/" target="_blank" rel="noopener nofollow">Jishan Bhattacharya&lt;/a>: WASM-based Dimension Reduction Visualization&lt;/strong>&lt;/p>
&lt;p>Jishan will be implementing a dimension reduction algorithm in Rust, compiling it to WebAssembly (WASM), and integrating it with the Qdrant Web UI. This project aims to provide a more efficient and smoother visualization experience, enabling it to handle more data points and higher dimensions.&lt;/p></description></item><item><title>Are You Vendor Locked?</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/are-you-vendor-locked/</link><pubDate>Sun, 05 May 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/are-you-vendor-locked/</guid><description>&lt;p>We all are.&lt;/p>
&lt;blockquote>
&lt;p>&lt;em>“There is no use fighting it. Pick a vendor and go all in. Everything else is a mirage.”&lt;/em>
The last words of a seasoned IT professional&lt;/p>
&lt;/blockquote>
&lt;p>As long as we are using any product, our solution’s infrastructure will depend on its vendors. Many say that building custom infrastructure will hurt velocity. &lt;strong>Is this true in the age of AI?&lt;/strong>&lt;/p>
&lt;p>It depends on where your company is at. Most startups don’t survive more than five years, so putting too much effort into infrastructure is not the best use of their resources. You first need to survive and demonstrate product viability.&lt;/p></description></item><item><title>Visua and Qdrant: Vector Search in Computer Vision</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-visua/</link><pubDate>Wed, 01 May 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-visua/</guid><description>&lt;p>&lt;img src="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-visua/image1.png" alt="visua/image1.png">&lt;/p>
&lt;p>For over a decade, &lt;a href="https://visua.com/" target="_blank" rel="noopener nofollow">VISUA&lt;/a> has been a leader in precise, high-volume computer vision data analysis, developing a robust platform that caters to a wide range of use cases, from startups to large enterprises. Starting with social media monitoring, where it excels in analyzing vast data volumes to detect company logos, VISUA has built a diverse ecosystem of customers, including names in social media monitoring, like &lt;strong>Brandwatch&lt;/strong>, cybersecurity like &lt;strong>Mimecast&lt;/strong>, trademark protection like &lt;strong>Ebay&lt;/strong> and several sports agencies like &lt;strong>Vision Insights&lt;/strong> for sponsorship evaluation.&lt;/p></description></item><item><title>Qdrant 1.9.0 - Heighten Your Security With Role-Based Access Control Support</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.9.x/</link><pubDate>Wed, 24 Apr 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-1.9.x/</guid><description>&lt;p>&lt;a href="https://github.com/qdrant/qdrant/releases/tag/v1.9.0" target="_blank" rel="noopener nofollow">Qdrant 1.9.0 is out!&lt;/a> This version complements the release of our new managed product &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a> with key security features valuable to our enterprise customers, and all those looking to productionize large-scale Generative AI. &lt;strong>Data privacy, system stability and resource optimizations&lt;/strong> are always on our mind - so let&amp;rsquo;s see what&amp;rsquo;s new:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Granular access control:&lt;/strong> You can further specify access control levels by using JSON Web Tokens.&lt;/li>
&lt;li>&lt;strong>Optimized shard transfers:&lt;/strong> The synchronization of shards between nodes is now significantly faster!&lt;/li>
&lt;li>&lt;strong>Support for byte embeddings:&lt;/strong> Reduce the memory footprint of Qdrant with official &lt;code>uint8&lt;/code> support.&lt;/li>
&lt;/ul>
&lt;h2 id="new-access-control-options-via-json-web-tokens">New access control options via JSON Web Tokens&lt;/h2>
&lt;p>Historically, our API key supported basic read and write operations. However, recognizing the evolving needs of our user base, especially large organizations, we&amp;rsquo;ve implemented additional options for finer control over data access within internal environments.&lt;/p></description></item><item><title>Qdrant's Trusted Partners for Hybrid Cloud Deployment</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-launch-partners/</link><pubDate>Mon, 15 Apr 2024 00:02:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-launch-partners/</guid><description>&lt;p>With the launch of &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a> we provide developers the ability to deploy Qdrant as a managed vector database in any desired environment, be it &lt;em>in the cloud, on premise, or on the edge&lt;/em>.&lt;/p>
&lt;p>We are excited to have trusted industry players support the launch of Qdrant Hybrid Cloud, allowing developers to unlock best-in-class advantages for building production-ready AI applications:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Deploy In Your Own Environment:&lt;/strong> Deploy the Qdrant vector database as a managed service on the infrastructure of choice, such as our launch partner solutions &lt;a href="https://blogs.oracle.com/cloud-infrastructure/post/qdrant-hybrid-cloud-now-available-oci-customers" target="_blank" rel="noopener nofollow">Oracle Cloud Infrastructure (OCI)&lt;/a>, &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-red-hat-openshift/">Red Hat OpenShift&lt;/a>, &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-vultr/">Vultr&lt;/a>, &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-digitalocean/">DigitalOcean&lt;/a>, &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-ovhcloud/">OVHcloud&lt;/a>, &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-scaleway/">Scaleway&lt;/a>, &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/documentation/hybrid-cloud/platform-deployment-options/#civo">Civo&lt;/a>, and &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud-stackit/">STACKIT&lt;/a>.&lt;/p></description></item><item><title>Qdrant Hybrid Cloud: the First Managed Vector Database You Can Run Anywhere</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud/</link><pubDate>Mon, 15 Apr 2024 00:01:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/hybrid-cloud/</guid><description>&lt;p>We are excited to announce the official launch of &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/hybrid-cloud/">Qdrant Hybrid Cloud&lt;/a> today, a significant 
leap forward in the field of vector search and enterprise AI. Rooted in our open-source origin, we are committed to offering our users and customers unparalleled control and sovereignty over their data and vector search workloads. Qdrant Hybrid Cloud stands as &lt;strong>the industry&amp;rsquo;s first managed vector database that can be deployed in any environment&lt;/strong> - be it cloud, on-premise, or the edge.&lt;/p></description></item><item><title>Advancements and Challenges in RAG Systems - Syed Asad | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/rag-advancements-challenges/</link><pubDate>Thu, 11 Apr 2024 22:25:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/rag-advancements-challenges/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;The problem with many of the vector databases is that they work fine, they are scalable. This is common. The problem is that they are not easy to use. So that is why I always use Qdrant.”&lt;/em>&lt;br>
— Syed Asad&lt;/p>
&lt;/blockquote>
&lt;p>Syed Asad is an accomplished AI/ML Professional, specializing in LLM Operations and RAGs. With a focus on Image Processing and Massive Scale Vector Search Operations, he brings a wealth of expertise to the field. His dedication to advancing artificial intelligence and machine learning technologies has been instrumental in driving innovation and solving complex challenges. Syed continues to push the boundaries of AI/ML applications, contributing significantly to the ever-evolving landscape of the industry.&lt;/p></description></item><item><title>Building Search/RAG for an OpenAPI spec - Nick Khami | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/building-search-rag-open-api/</link><pubDate>Thu, 11 Apr 2024 22:23:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/building-search-rag-open-api/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;It&amp;rsquo;s very, very simple to build search over an Open API specification with a tool like Trieve and Qdrant. I think really there&amp;rsquo;s something to highlight here and how awesome it is to work with a group based system if you&amp;rsquo;re using Qdrant.”&lt;/em>&lt;br>
— Nick Khami&lt;/p>
&lt;/blockquote>
&lt;p>Nick Khami, a seasoned full-stack engineer, has been deeply involved in the development of vector search and RAG applications since the inception of Qdrant v0.11.0 back in October 2022. His expertise and passion for innovation led him to establish Trieve, a company dedicated to facilitating businesses in embracing cutting-edge vector search and RAG technologies.&lt;/p></description></item><item><title>Iveta Lohovska on Gen AI and Vector Search | Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/gen-ai-and-vector-search/</link><pubDate>Thu, 11 Apr 2024 22:12:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/gen-ai-and-vector-search/</guid><description>&lt;h1 id="exploring-gen-ai-and-vector-search-insights-from-iveta-lohovska">Exploring Gen AI and Vector Search: Insights from Iveta Lohovska&lt;/h1>
&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;In the generative AI context of AI, all foundational models have been trained on some foundational data sets that are distributed in different ways. Some are very conversational, some are very technical, some are on, let&amp;rsquo;s say very strict taxonomy like healthcare or chemical structures. We call them modalities, and they have different representations.”&lt;/em>&lt;br>
— Iveta Lohovska&lt;/p>
&lt;/blockquote>
&lt;p>Iveta Lohovska serves as the Chief Technologist and Principal Data Scientist for AI and Supercomputing at &lt;a href="https://www.hpe.com/us/en/home.html" target="_blank" rel="noopener nofollow">Hewlett Packard Enterprise (HPE)&lt;/a>, where she champions the democratization of decision intelligence and the development of ethical AI solutions. An industry leader, her multifaceted expertise encompasses natural language processing, computer vision, and data mining. Committed to leveraging technology for societal benefit, Iveta is a distinguished technical advisor to the United Nations&amp;rsquo; AI for Good program and a Data Science lecturer at the Vienna University of Applied Sciences. Her career also includes impactful roles with the World Bank Group, focusing on open data initiatives and Sustainable Development Goals (SDGs), as well as collaborations with USAID and the Gates Foundation.&lt;/p></description></item><item><title>Teaching Vector Databases at Scale - Alfredo Deza | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/teaching-vector-db-at-scale/</link><pubDate>Tue, 09 Apr 2024 03:06:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/teaching-vector-db-at-scale/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;So usually I get asked, why are you using Qdrant? What&amp;rsquo;s the big deal? Why are you picking these over all of the other ones? And to me it boils down to, aside from being renowned or recognized, that it works fairly well. There&amp;rsquo;s one core component that is critical here, and that is it has to be very straightforward, very easy to set up so that I can teach it, because if it&amp;rsquo;s easy, well, sort of like easy to or straightforward to teach, then you can take the next step and you can make it a little more complex, put other things around it, and that creates a great development experience and a learning experience as well.”&lt;/em>&lt;br>
— Alfredo Deza&lt;/p></description></item><item><title>How to meow on the long tail with Cheshire Cat AI? - Piero and Nicola | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/meow-with-cheshire-cat/</link><pubDate>Tue, 09 Apr 2024 03:05:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/meow-with-cheshire-cat/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;We love Qdrant! It is our default DB. We support it in three different forms, file based, container based, and cloud based as well.”&lt;/em>&lt;br>
— Piero Savastano&lt;/p>
&lt;/blockquote>
&lt;p>Piero Savastano is the Founder and Maintainer of the open-source project Cheshire Cat AI. He started out in pure Deep Learning research, writing his first neural network from scratch at the age of 19. After a period as a researcher at La Sapienza and CNR, he now provides international consulting, training, and mentoring services in machine and deep learning. He spreads Artificial Intelligence awareness on YouTube and TikTok.&lt;/p></description></item><item><title>Response to CVE-2024-2221: Arbitrary file upload vulnerability</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/cve-2024-2221-response/</link><pubDate>Fri, 05 Apr 2024 13:00:00 -0700</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/cve-2024-2221-response/</guid><description>&lt;h3 id="summary">Summary&lt;/h3>
&lt;p>A security vulnerability has been discovered in Qdrant affecting all versions
prior to v1.9, described in &lt;a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-2221" target="_blank" rel="noopener nofollow">CVE-2024-2221&lt;/a>.
The vulnerability allows an attacker to upload arbitrary files to the
filesystem, which can be used to gain remote code execution.&lt;/p>
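For self-hosted deployments, the recommended path is to upgrade to v1.9 or later and to enable API-key authentication, which is not enabled by default in self-hosted setups. A minimal configuration sketch (the key value is a placeholder):

```yaml
# config/config.yaml — enable API-key authentication on a self-hosted Qdrant
service:
  # Clients must then send this value in the `api-key` request header
  api_key: replace-with-a-strong-secret
```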
&lt;p>The vulnerability does not materially affect Qdrant cloud deployments, as that
filesystem is read-only and authentication is enabled by default. At worst,
the vulnerability could be used by an authenticated user to crash a cluster,
which is already possible, such as by uploading more vectors than can fit in RAM.&lt;/p></description></item><item><title>Introducing FastLLM: Qdrant’s Revolutionary LLM</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/fastllm-announcement/</link><pubDate>Mon, 01 Apr 2024 00:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/fastllm-announcement/</guid><description>&lt;p>Today, we&amp;rsquo;re happy to announce that &lt;strong>FastLLM (FLLM)&lt;/strong>, our lightweight Language Model tailored specifically for Retrieval Augmented Generation (RAG) use cases, has officially entered Early Access!&lt;/p>
&lt;p>Developed to seamlessly integrate with Qdrant, &lt;strong>FastLLM&lt;/strong> represents a significant leap forward in AI-driven content generation. Until now, LLMs could handle only up to a few million tokens.&lt;/p>
&lt;p>&lt;strong>As of today, FLLM offers a context window of 1 billion tokens.&lt;/strong>&lt;/p>
&lt;p>However, what sets FastLLM apart is its optimized architecture, making it the ideal choice for RAG applications. With minimal effort, you can combine FastLLM and Qdrant to launch applications that process vast amounts of data. Leveraging the power of Qdrant&amp;rsquo;s scalability features, FastLLM promises to revolutionize how enterprise AI applications generate and retrieve content at massive scale.&lt;/p></description></item><item><title>VirtualBrain: Best RAG to unleash the real power of AI - Guillaume Marquis | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/virtualbrain-best-rag/</link><pubDate>Wed, 27 Mar 2024 12:41:51 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/virtualbrain-best-rag/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;It&amp;rsquo;s like mandatory to have a vector database that is scalable, that is fast, that has low latencies, that can under parallel request a large amount of requests. So you have really this need and Qdrant was like an obvious choice.”&lt;/em>&lt;br>
— Guillaume Marquis&lt;/p>
&lt;/blockquote>
&lt;p>Guillaume Marquis, a dedicated Engineer and AI enthusiast, serves as the Chief Technology Officer and Co-Founder of VirtualBrain, an innovative AI company. He is committed to exploring novel approaches to integrating artificial intelligence into everyday life, driven by a passion for advancing the field and its applications.&lt;/p></description></item><item><title>Talk with YouTube without paying a cent - Francesco Saverio Zuppichini | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/youtube-without-paying-cent/</link><pubDate>Wed, 27 Mar 2024 12:37:55 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/youtube-without-paying-cent/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;Now I do believe that Qdrant, I&amp;rsquo;m not sponsored by Qdrant, but I do believe it&amp;rsquo;s the best one for a couple of reasons. And we&amp;rsquo;re going to see them mostly because I can just run it on my computer so it&amp;rsquo;s full private and I&amp;rsquo;m in charge of my data.”&lt;/em>&lt;br>
&amp;ndash; Francesco Saverio Zuppichini&lt;/p>
&lt;/blockquote>
&lt;p>Francesco Saverio Zuppichini is a Senior Full Stack Machine Learning Engineer at Zurich Insurance with experience in both large corporations and startups of various sizes. He is passionate about sharing knowledge, and building communities, and is known as a skilled practitioner in computer vision. He is proud of the community he built because of all the amazing people he got to know.&lt;/p></description></item><item><title>Qdrant is Now Available on Azure Marketplace!</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/azure-marketplace/</link><pubDate>Tue, 26 Mar 2024 10:30:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/azure-marketplace/</guid><description>&lt;p>We&amp;rsquo;re thrilled to announce that Qdrant is now &lt;a href="https://azuremarketplace.microsoft.com/en-en/marketplace/apps/qdrantsolutionsgmbh1698769709989.qdrant-db" target="_blank" rel="noopener nofollow">officially available on Azure Marketplace&lt;/a>, bringing enterprise-level vector search directly to Azure&amp;rsquo;s vast community of users. This integration marks a significant milestone in our journey to make Qdrant more accessible and convenient for businesses worldwide.&lt;/p>
&lt;blockquote>
&lt;p>&lt;em>With the landscape of AI being complex for most customers, Qdrant&amp;rsquo;s ease of use provides an easy approach for customers&amp;rsquo; implementation of RAG patterns for Generative AI solutions and additional choices in selecting AI components on Azure,&lt;/em> - Tara Walker, Principal Software Engineer at Microsoft.&lt;/p></description></item><item><title>Production-scale RAG for Real-Time News Distillation - Robert Caulk | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/real-time-news-distillation-rag/</link><pubDate>Mon, 25 Mar 2024 08:49:22 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/real-time-news-distillation-rag/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;We&amp;rsquo;ve got a lot of fun challenges ahead of us in the industry, I think, and the industry is establishing best practices. Like you said, everybody&amp;rsquo;s just trying to figure out what&amp;rsquo;s going on. And some of these base layer tools like Qdrant really enable products and enable companies and they enable us.”&lt;/em>&lt;br>
&amp;ndash; Robert Caulk&lt;/p>
&lt;/blockquote>
&lt;p>Robert, Founder of Emergent Methods is a scientist by trade, dedicating his career to a variety of open-source projects that range from large-scale artificial intelligence to discrete element modeling. He is currently working with a team at Emergent Methods to adaptively model over 1 million news articles per day, with a goal of reducing media bias and improving news awareness.&lt;/p></description></item><item><title>Insight Generation Platform for LifeScience Corporation - Hooman Sedghamiz | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/insight-generation-platform/</link><pubDate>Mon, 25 Mar 2024 08:46:28 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/insight-generation-platform/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;There is this really great vector db comparison that came out recently. I saw there are like maybe more than 40 vector stores in 2024. When we started back in 2023, there were only a few. What I see, which is really lacking in this pipeline of retrieval augmented generation is major innovation around data pipeline.”&lt;/em>&lt;br>
&amp;ndash; Hooman Sedghamiz&lt;/p>
&lt;/blockquote>
&lt;p>Hooman Sedghamiz, Sr. Director AI/ML - Insights at Bayer AG, is a distinguished figure in AI and ML in the life sciences field. With years of experience, he has led teams and projects that have greatly advanced medical products, including implantable and wearable devices. Notably, he served as the Generative AI product owner and Senior Director at Bayer Pharmaceuticals, where he played a pivotal role in developing a GPT-based central platform for precision medicine.&lt;/p></description></item><item><title>The challenges in using LLM-as-a-Judge - Sourabh Agrawal | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/llm-as-a-judge/</link><pubDate>Tue, 19 Mar 2024 15:05:02 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/llm-as-a-judge/</guid><description>&lt;blockquote>
&lt;p>&amp;ldquo;&lt;em>You don&amp;rsquo;t want to use an expensive model like GPT 4 for evaluation, because then the cost adds up and it does not work out. If you are spending more on evaluating the responses, you might as well just do something else, like have a human to generate the responses.&lt;/em>”&lt;br>
&amp;ndash; Sourabh Agrawal&lt;/p>
&lt;/blockquote>
&lt;p>Sourabh Agrawal, CEO &amp;amp; Co-Founder at UpTrain AI, is a seasoned entrepreneur and AI/ML expert with a diverse background. He began his career at Goldman Sachs, where he developed machine learning models for financial markets. Later, he contributed to the autonomous driving team at Bosch/Mercedes, focusing on computer vision modules for scene understanding. In 2020, Sourabh ventured into entrepreneurship, founding an AI-powered fitness startup that gained over 150,000 users. Throughout his career, he encountered challenges in evaluating AI models, particularly Generative AI models. To address this issue, Sourabh is developing UpTrain, an open-source LLMOps tool designed to evaluate, test, and monitor LLM applications. UpTrain provides scores and offers insights to enhance LLM applications by performing root-cause analysis, identifying common patterns among failures, and providing automated suggestions for resolution.&lt;/p></description></item><item><title>Vector Search for Content-Based Video Recommendation - Gladys and Samuel from Dailymotion</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-search-vector-recommendation/</link><pubDate>Tue, 19 Mar 2024 14:08:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-search-vector-recommendation/</guid><description>&lt;blockquote>
&lt;p>&amp;ldquo;&lt;em>The vector search engine that we chose is Qdrant, but why did we choose it? Actually, it answers all the load constraints and the technical needs that we had. It allows us to do a fast neighbor search. It has a python API which matches the recommender tag that we have.&lt;/em>”&lt;br>
&amp;ndash; Gladys Roch&lt;/p>
&lt;/blockquote>
&lt;p>Gladys Roch is a French Machine Learning Engineer at Dailymotion working on recommender systems for video content.&lt;/p></description></item><item><title>Integrating Qdrant and LangChain for Advanced Vector Similarity Search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/using-qdrant-and-langchain/</link><pubDate>Tue, 12 Mar 2024 09:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/using-qdrant-and-langchain/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;Building AI applications doesn&amp;rsquo;t have to be complicated. You can leverage pre-trained models and support complex pipelines with a few lines of code. LangChain provides a unified interface, so that you can avoid writing boilerplate code and focus on the value you want to bring.&amp;rdquo;&lt;/em> Kacper Lukawski, Developer Advocate, Qdrant&lt;/p>
&lt;/blockquote>
&lt;h2 id="long-term-memory-for-your-genai-app">Long-Term Memory for Your GenAI App&lt;/h2>
&lt;p>Qdrant&amp;rsquo;s vector database quickly grew due to its ability to make Generative AI more effective. On its own, an LLM can be used to build a process-altering invention. With Qdrant, you can turn this invention into a production-level app that brings real business value.&lt;/p></description></item><item><title>IrisAgent and Qdrant: Redefining Customer Support with AI</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/iris-agent-qdrant/</link><pubDate>Wed, 06 Mar 2024 07:45:34 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/iris-agent-qdrant/</guid><description>&lt;p>Artificial intelligence is evolving customer support, offering unprecedented capabilities for automating interactions, understanding user needs, and enhancing the overall customer experience. &lt;a href="https://irisagent.com/" target="_blank" rel="noopener nofollow">IrisAgent&lt;/a>, founded by former Google product manager &lt;a href="https://www.linkedin.com/in/palakdalal/" target="_blank" rel="noopener nofollow">Palak Dalal Bhatia&lt;/a>, demonstrates the concrete impact of AI on customer support with its AI-powered customer support automation platform.&lt;/p>
&lt;p>Bhatia describes IrisAgent as “the system of intelligence which sits on top of existing systems of records like support tickets, engineering bugs, sales data, or product data,” with the main objective of leveraging AI and generative AI to automatically detect the intent and tags behind customer support tickets, reply to a large volume of support tickets and chats, improve time to resolution, and increase the deflection rate of support teams. Ultimately, IrisAgent enables support teams to do more with less and be more effective in helping customers.&lt;/p></description></item><item><title>Dailymotion's Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-dailymotion/</link><pubDate>Tue, 27 Feb 2024 13:22:31 +0100</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-dailymotion/</guid><description>&lt;h2 id="dailymotions-journey-to-crafting-the-ultimate-content-driven-video-recommendation-engine-with-qdrant-vector-database">Dailymotion&amp;rsquo;s Journey to Crafting the Ultimate Content-Driven Video Recommendation Engine with Qdrant Vector Database&lt;/h2>
&lt;p>In today&amp;rsquo;s digital age, the consumption of video content has become ubiquitous, with an overwhelming abundance of options available at our fingertips. However, amidst this vast sea of videos, the challenge lies not in finding content, but in discovering the content that truly resonates with individual preferences and interests and yet is diverse enough to not throw users into their own filter bubble. As viewers, we seek meaningful and relevant videos that enrich our experiences, provoke thought, and spark inspiration.&lt;/p></description></item><item><title>Qdrant vs Pinecone: Vector Databases for AI Apps</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/comparing-qdrant-vs-pinecone-vector-databases/</link><pubDate>Sun, 25 Feb 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/comparing-qdrant-vs-pinecone-vector-databases/</guid><description>&lt;h1 id="qdrant-vs-pinecone-an-analysis-of-vector-databases-for-ai-applications">Qdrant vs Pinecone: An Analysis of Vector Databases for AI Applications&lt;/h1>
&lt;p>Data forms the foundation upon which AI applications are built. Data can exist in both structured and unstructured formats. Structured data typically has well-defined schemas or inherent relationships. However, unstructured data, such as text, image, audio, or video, must first be converted into numerical representations known as &lt;a href="https://qdrant.tech/articles/what-are-embeddings/" target="_blank" rel="noopener nofollow">vector embeddings&lt;/a>. These embeddings encapsulate the semantic meaning or features of unstructured data and are in the form of high-dimensional vectors.&lt;/p></description></item><item><title>What is Vector Similarity? Understanding its Role in AI Applications.</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/what-is-vector-similarity/</link><pubDate>Sat, 24 Feb 2024 00:00:00 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/what-is-vector-similarity/</guid><description>&lt;h1 id="understanding-vector-similarity-powering-next-gen-ai-applications">Understanding Vector Similarity: Powering Next-Gen AI Applications&lt;/h1>
&lt;p>A core function of a wide range of AI applications is to first understand the &lt;em>meaning&lt;/em> behind a user query, and then provide &lt;em>relevant&lt;/em> answers to the questions that the user is asking. With increasingly advanced interfaces and applications, this query can take the form of language, an image, audio, video, or other &lt;em>unstructured&lt;/em> data.&lt;/p>
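Under the hood, "understanding meaning" reduces to comparing embedding vectors with a similarity metric, most commonly cosine similarity. A minimal pure-Python sketch of the idea (the three-dimensional vectors and their values are made up for illustration; real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction, near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" — hypothetical values chosen so that the query and the
# jacket point in roughly the same direction in the vector space.
trek_query = [0.9, 0.1, 0.3]
waterproof_jacket = [0.8, 0.2, 0.4]
summer_dress = [0.1, 0.9, 0.2]

print(cosine_similarity(trek_query, waterproof_jacket))  # ≈ 0.98 — highly similar
print(cosine_similarity(trek_query, summer_dress))       # ≈ 0.27 — not very similar
```

A vector database ranks stored embeddings by exactly this kind of score, so 'clothing for a trek' can surface waterproof jackets even though no keyword matches.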
&lt;p>On an ecommerce platform, a user can, for instance, search for ‘clothing for a trek’ when what they actually want is ‘waterproof jackets’ or ‘winter socks’. Keyword, full-text, or even synonym search would fail to provide relevant results for such a query. Similarly, on a music app, a user might be looking for songs that sound similar to an audio clip they have heard, or want to look up furniture with a similar look to a piece they saw on a trip.&lt;/p></description></item><item><title>DSPy vs LangChain: A Comprehensive Framework Comparison</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/dspy-vs-langchain/</link><pubDate>Fri, 23 Feb 2024 08:00:00 -0300</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/dspy-vs-langchain/</guid><description>&lt;h1 id="the-evolving-landscape-of-ai-frameworks">The Evolving Landscape of AI Frameworks&lt;/h1>
&lt;p>As Large Language Models (LLMs) and vector stores have become steadily more powerful, a new generation of frameworks has appeared which can streamline the development of AI applications by leveraging LLMs and vector search technology. These frameworks simplify the process of building everything from Retrieval Augmented Generation (RAG) applications to complex chatbots with advanced conversational abilities, and even sophisticated reasoning-driven AI applications.&lt;/p>
&lt;p>The most well-known of these frameworks is possibly &lt;a href="https://github.com/langchain-ai/langchain" target="_blank" rel="noopener nofollow">LangChain&lt;/a>. &lt;a href="https://en.wikipedia.org/wiki/LangChain" target="_blank" rel="noopener nofollow">Launched in October 2022&lt;/a> as an open-source project by Harrison Chase, the project quickly gained popularity, attracting contributions from hundreds of developers on GitHub. LangChain excels in its broad support for documents, data sources, and APIs. This, along with seamless integration with vector stores like Qdrant and the ability to chain multiple LLMs, has allowed developers to build complex AI applications without reinventing the wheel.&lt;/p></description></item><item><title>Qdrant Summer of Code 24</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-summer-of-code-24/</link><pubDate>Wed, 21 Feb 2024 00:39:53 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-summer-of-code-24/</guid><description>&lt;p>Google Summer of Code (#GSoC) is celebrating its 20th anniversary this year with the 2024 program. Over the past 20 years, 19K new contributors were introduced to #opensource through the program under the guidance of thousands of mentors from over 800 open-source organizations in various fields. Qdrant participated successfully in the program last year. Both projects, the UI Dashboard with unstructured data visualization and the advanced Geo Filtering, were completed in time and are now a part of the engine. 
One of the two young contributors joined the team and continues working on the project.&lt;/p></description></item><item><title>Dust and Qdrant: Using AI to Unlock Company Knowledge and Drive Employee Productivity</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/dust-and-qdrant/</link><pubDate>Tue, 06 Feb 2024 07:03:26 -0800</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/dust-and-qdrant/</guid><description>&lt;p>One of the major promises of artificial intelligence is its potential to
accelerate efficiency and productivity within businesses, empowering employees
and teams in their daily tasks. The French company &lt;a href="https://dust.tt/" target="_blank" rel="noopener nofollow">Dust&lt;/a>, co-founded by former
Open AI Research Engineer &lt;a href="https://www.linkedin.com/in/spolu/" target="_blank" rel="noopener nofollow">Stanislas Polu&lt;/a>, set out to deliver on this promise by
providing businesses and teams with an expansive platform for building
customizable and secure AI assistants.&lt;/p>
&lt;h2 id="challenge">Challenge&lt;/h2>
&lt;p>&amp;ldquo;The past year has shown that large language models (LLMs) are very useful but
complicated to deploy,&amp;rdquo; Polu says, especially in the context of their
application across business functions. This is why he believes that augmenting
human productivity at scale is primarily a product unlock rather than only a
research unlock: the challenge is to identify the best way for companies to
leverage these models. Dust is therefore creating a product that sits between
humans and large language models, focused on supporting the work of teams
within a company to ultimately enhance employee productivity.&lt;/p></description></item><item><title>The Bitter Lesson of Retrieval in Generative Language Model Workflows - Mikko Lehtimäki | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/bitter-lesson-generative-language-model/</link><pubDate>Mon, 29 Jan 2024 16:31:02 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/bitter-lesson-generative-language-model/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;If you haven&amp;rsquo;t heard of the bitter lesson, it&amp;rsquo;s actually a theorem. It&amp;rsquo;s based on a blog post by Richard Sutton, and it states basically that based on what we have learned from the development of machine learning and artificial intelligence systems in the previous decades, the methods that can leverage data and compute tends to or will eventually outperform the methods that are designed or handcrafted by humans.”&lt;/em>&lt;br>
&amp;ndash; Mikko Lehtimäki&lt;/p></description></item><item><title>Indexify Unveiled - Diptanu Gon Choudhury | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/indexify-content-extraction-engine/</link><pubDate>Fri, 26 Jan 2024 16:40:55 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/indexify-content-extraction-engine/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;We have something like Qdrant, which is very geared towards doing Vector search. And so we understand the shape of the storage system now.”&lt;/em>&lt;br>
— Diptanu Gon Choudhury&lt;/p>
&lt;/blockquote>
&lt;p>Diptanu Gon Choudhury is the founder of Tensorlake. They are building Indexify, an open-source, scalable structured-extraction engine that turns unstructured data into near-real-time knowledge bases for AI/agent-driven workflows and query engines. Before building Indexify, Diptanu created the Nomad cluster scheduler at HashiCorp, invented the Titan/Titus cluster scheduler at Netflix, led the FBLearner machine learning platform, and built the real-time speech inference engine at Facebook.&lt;/p></description></item><item><title>Unlocking AI Potential: Insights from Stanislas Polu</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-x-dust-vector-search/</link><pubDate>Fri, 26 Jan 2024 16:22:37 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-x-dust-vector-search/</guid><description>&lt;h1 id="qdrant-x-dust-how-vector-search-helps-make-work-better-with-stanislas-polu">Qdrant x Dust: How Vector Search Helps Make Work Better with Stanislas Polu&lt;/h1>
&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;We ultimately chose Qdrant due to its open-source nature, strong performance, being written in Rust, comprehensive documentation, and the feeling of control.”&lt;/em>&lt;br>
&amp;ndash; Stanislas Polu&lt;/p>
&lt;/blockquote>
&lt;p>Stanislas Polu is the Co-Founder and an Engineer at Dust. He previously sold a company to Stripe and spent 5 years there, watching the company grow from 80 to 3,000 people. He then pivoted to research at OpenAI, working on large language models and mathematical reasoning capabilities. He started Dust 6 months ago to make work work better with LLMs.&lt;/p></description></item><item><title>Announcing Qdrant's $28M Series A Funding Round</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/series-a-funding-round/</link><pubDate>Tue, 23 Jan 2024 09:00:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/series-a-funding-round/</guid><description>&lt;p>Today, we are excited to announce our $28M Series A funding round, which is led by Spark Capital with participation from our existing investors Unusual Ventures and 42CAP.&lt;/p>
&lt;p>We have seen incredible user growth and support from our open-source community in the past two years - recently exceeding 5M downloads. This is a testament to our mission to build the most efficient, scalable, high-performance vector database on the market. We are excited to further accelerate this trajectory with our new partner and investor, Spark Capital, and the continued support of Unusual Ventures and 42CAP. This partnership uniquely positions us to empower enterprises with cutting edge vector search technology to build truly differentiating, next-gen AI applications at scale.&lt;/p></description></item><item><title>Introducing Qdrant Cloud on Microsoft Azure</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-cloud-on-microsoft-azure/</link><pubDate>Wed, 17 Jan 2024 08:40:42 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-cloud-on-microsoft-azure/</guid><description>&lt;p>Great news! We&amp;rsquo;ve expanded Qdrant&amp;rsquo;s managed vector database offering — &lt;a href="https://cloud.qdrant.io/" target="_blank" rel="noopener nofollow">Qdrant Cloud&lt;/a> — to be available on Microsoft Azure.
You can now effortlessly set up your environment on Azure, which reduces deployment time, so you can hit the ground running.&lt;/p>
&lt;p>&lt;a href="https://cloud.qdrant.io/" target="_blank" rel="noopener nofollow">Get started&lt;/a>&lt;/p>
&lt;p>What this means for you:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Rapid application development&lt;/strong>: Deploy your own cluster through the Qdrant Cloud Console within seconds and scale your resources as needed.&lt;/li>
&lt;li>&lt;strong>Billion vector scale&lt;/strong>: Seamlessly grow and handle large-scale datasets with billions of vectors. Leverage Qdrant features like horizontal scaling and binary quantization with Microsoft Azure&amp;rsquo;s scalable infrastructure.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>&amp;ldquo;With Qdrant, we found the missing piece to develop our own provider independent multimodal generative AI platform at enterprise scale.&amp;rdquo;&lt;/strong> &amp;ndash; Jeremy Teichmann (AI Squad Technical Lead &amp;amp; Generative AI Expert), Daly Singh (AI Squad Lead &amp;amp; Product Owner) - Bosch Digital.&lt;/p></description></item><item><title>Qdrant Updated Benchmarks 2024</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-benchmarks-2024/</link><pubDate>Mon, 15 Jan 2024 09:29:33 -0300</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-benchmarks-2024/</guid><description>&lt;p>It&amp;rsquo;s time for an update to Qdrant&amp;rsquo;s benchmarks!&lt;/p>
&lt;p>We&amp;rsquo;ve compared how Qdrant performs against other vector search engines to give you a thorough performance analysis. Let&amp;rsquo;s get into what&amp;rsquo;s new and what remains the same in our approach.&lt;/p>
&lt;h3 id="whats-changed">What&amp;rsquo;s Changed?&lt;/h3>
&lt;h4 id="all-engines-have-improved">All engines have improved&lt;/h4>
&lt;p>Since the last time we ran our benchmarks, we received a bunch of suggestions on how to run other engines more efficiently, and we applied them.&lt;/p></description></item><item><title>Navigating challenges and innovations in search technologies</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/navigating-challenges-innovations/</link><pubDate>Fri, 12 Jan 2024 15:39:53 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/navigating-challenges-innovations/</guid><description>&lt;h2 id="navigating-challenges-and-innovations-in-search-technologies">Navigating challenges and innovations in search technologies&lt;/h2>
&lt;p>We participated in a &lt;a href="#podcast-discussion-recap">podcast&lt;/a> on search technologies, specifically with retrieval-augmented generation (RAG) in language models.&lt;/p>
&lt;p>RAG is a cutting-edge approach in natural language processing (NLP). It combines information retrieval with language generation models. We describe how it can enhance what AI can do to understand, retrieve, and generate human-like text.&lt;/p>
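The retrieve-then-generate loop can be sketched in a few lines of Python. This is a toy illustration only: it scores hypothetical documents by word overlap instead of embeddings, and the prompt would be handed to an LLM in a real system.

```python
# Toy sketch of the RAG retrieval step: score documents against a query
# by word overlap, pick the best match, and build a grounded prompt.
# A production system would use embeddings and a vector database instead.

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Assemble the prompt an LLM would receive: context first, then the question."""
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

docs = [
    "Qdrant is a vector database for similarity search.",
    "Bread is baked from flour, water, and yeast.",
]
prompt = build_prompt("What is a vector database?",
                      retrieve("What is a vector database?", docs))
```

The generation model then answers from the retrieved context rather than from its parameters alone, which is what grounds the response.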
&lt;h3 id="more-about-rag">More about RAG&lt;/h3>
&lt;p>Think of RAG as a system that finds relevant knowledge from a vast database. It takes your query, finds the best available information, and then provides an answer.&lt;/p></description></item><item><title>Optimizing an Open Source Vector Database with Andrey Vasnetsov</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/open-source-vector-search-engine-vector-database/</link><pubDate>Wed, 10 Jan 2024 16:04:57 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/open-source-vector-search-engine-vector-database/</guid><description>&lt;h1 id="optimizing-open-source-vector-search-strategies-from-andrey-vasnetsov-at-qdrant">Optimizing Open Source Vector Search: Strategies from Andrey Vasnetsov at Qdrant&lt;/h1>
&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;For systems like Qdrant, scalability and performance in my opinion, is much more important than transactional consistency, so it should be treated as a search engine rather than database.&amp;rdquo;&lt;/em>&lt;br>
&amp;ndash; Andrey Vasnetsov&lt;/p>
&lt;/blockquote>
&lt;p>Discussing core differences between search engines and databases, Andrey underlined the importance of application needs and scalability in database selection for vector search tasks.&lt;/p>
&lt;p>Andrey Vasnetsov, CTO at Qdrant, is an enthusiast of &lt;a href="https://qdrant.tech/" target="_blank" rel="noopener nofollow">Open Source&lt;/a>, machine learning, and vector search. He works on Open Source projects related to &lt;a href="https://qdrant.tech/articles/vector-similarity-beyond-search/" target="_blank" rel="noopener nofollow">Vector Similarity Search&lt;/a> and Similarity Learning. He prefers practical over theoretical, working demo over arXiv paper.&lt;/p>
&lt;p>&lt;em>&amp;ldquo;I really think it&amp;rsquo;s something the technology is ready for and would really help this kind of embedding model jumping onto the text search projects.”&lt;/em>&lt;br>
&amp;ndash; Noé Achache on the future of image embedding&lt;/p>
&lt;/blockquote>
&lt;p>Exploring the depths of vector search? Want an analysis of its application in image search and document retrieval? Noé has you covered.&lt;/p>
&lt;p>Noé Achache is a Lead Data Scientist at Sicara, where he worked on a wide range of projects mostly related to computer vision, prediction with structured data, and more recently LLMs.&lt;/p></description></item><item><title>How to Superpower Your Semantic Search Using a Vector Database Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/semantic-search-vector-database/</link><pubDate>Tue, 09 Jan 2024 12:27:18 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/semantic-search-vector-database/</guid><description>&lt;h1 id="how-to-superpower-your-semantic-search-using-a-vector-database-with-nicolas-mauti">How to Superpower Your Semantic Search Using a Vector Database with Nicolas Mauti&lt;/h1>
&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;We found a trade off between performance and precision in Qdrant’s that were better for us than what we can found on Elasticsearch.”&lt;/em>&lt;br>
&amp;ndash; Nicolas Mauti&lt;/p>
&lt;/blockquote>
&lt;p>Want precision &amp;amp; performance in freelancer search? Malt&amp;rsquo;s move to the Qdrant database is a masterstroke, offering geospatial filtering &amp;amp; seamless scaling. How did Nicolas Mauti and the team at Malt identify the need to transition to a retriever-ranker architecture for their freelancer matching app?&lt;/p></description></item><item><title>Building LLM Powered Applications in Production - Hamza Farooq | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/llm-complex-search-copilot/</link><pubDate>Tue, 09 Jan 2024 12:16:22 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/llm-complex-search-copilot/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;There are 10 billion search queries a day, estimated half of them go unanswered. Because people don&amp;rsquo;t actually use search as what we used.”&lt;/em>&lt;br>
&amp;ndash; Hamza Farooq&lt;/p>
&lt;/blockquote>
&lt;p>How do you think Hamza&amp;rsquo;s background in machine learning and previous experiences at Google and Walmart Labs have influenced his approach to building LLM-powered applications?&lt;/p>
&lt;p>Hamza Farooq, an accomplished educator and AI enthusiast, is the founder of Traversaal.ai. His journey is marked by a relentless passion for AI exploration, particularly in building Large Language Models. As an adjunct professor at UCLA Anderson, Hamza shapes the future of AI by teaching cutting-edge technology courses. At Traversaal.ai, he empowers businesses with domain-specific AI solutions, focusing on conversational search and recommendation systems to deliver personalized experiences. With a diverse career spanning academia, industry, and entrepreneurship, Hamza brings a wealth of experience from time at Google. His overarching goal is to bridge the gap between AI innovation and real-world applications, introducing transformative solutions to the market. Hamza eagerly anticipates the dynamic challenges and opportunities in the ever-evolving field of AI and machine learning.&lt;/p></description></item><item><title>Building a High-Performance Entity Matching Solution with Qdrant - Rishabh Bhardwaj | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/entity-matching-qdrant/</link><pubDate>Tue, 09 Jan 2024 11:53:56 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/entity-matching-qdrant/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;When we were building proof of concept for this solution, we initially started with Postgres. But after some experimentation, we realized that it basically does not perform very well in terms of recall and speed&amp;hellip; then we came to know that Qdrant performs a lot better as compared to other solutions that existed at the moment.”&lt;/em>&lt;br>
&amp;ndash; Rishabh Bhardwaj&lt;/p>
&lt;/blockquote>
&lt;p>How does the HNSW (Hierarchical Navigable Small World) algorithm benefit the solution built by Rishabh?&lt;/p></description></item><item><title>FastEmbed: Fast &amp; Lightweight Embedding Generation - Nirant Kasliwal | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/fast-embed-models/</link><pubDate>Tue, 09 Jan 2024 11:38:59 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/fast-embed-models/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;When things are actually similar or how we define similarity. They are close to each other and if they are not, they&amp;rsquo;re far from each other. This is what a model or embedding model tries to do.”&lt;/em>&lt;br>
&amp;ndash; Nirant Kasliwal&lt;/p>
&lt;/blockquote>
&lt;p>Heard about FastEmbed? It&amp;rsquo;s a game-changer. Nirant shares tricks on how to improve your embedding models. You might want to give it a shot!&lt;/p>
&lt;p>Nirant Kasliwal, the creator and maintainer of FastEmbed, has made notable contributions to the Finetuning Cookbook at OpenAI Cookbook. His contributions extend to the field of Natural Language Processing (NLP), with over 5,000 copies of the NLP book sold.&lt;/p></description></item><item><title>When music just doesn't match our vibe, can AI help? - Filip Makraduli | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/human-language-ai-models/</link><pubDate>Tue, 09 Jan 2024 10:44:20 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/human-language-ai-models/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;Was it possible to somehow maybe find a way to transfer this feeling that we have this vibe and get the help of AI to understand what exactly we need at that moment in terms of songs?”&lt;/em>&lt;br>
&amp;ndash; Filip Makraduli&lt;/p>
&lt;/blockquote>
&lt;p>Imagine if the recommendation system could understand spoken instructions or hummed melodies. This would greatly impact the user experience and accuracy of the recommendations.&lt;/p>
&lt;p>Filip Makraduli, an electrical engineering graduate from Skopje, Macedonia, expanded his academic horizons with a Master&amp;rsquo;s in Biomedical Data Science from Imperial College London.&lt;/p></description></item><item><title>Binary Quantization - Andrey Vasnetsov | Vector Space Talks</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/binary-quantization/</link><pubDate>Tue, 09 Jan 2024 10:30:10 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/binary-quantization/</guid><description>&lt;blockquote>
&lt;p>&lt;em>&amp;ldquo;Everything changed when we actually tried binary quantization with OpenAI model.”&lt;/em>&lt;br>
&amp;ndash; Andrey Vasnetsov&lt;/p>
&lt;/blockquote>
&lt;p>Ever wonder why we need quantization for vector indexes? Andrey Vasnetsov explains the complexities and challenges of searching through proximity graphs. Binary quantization reduces storage size and boosts speed by 30x, but not all models are compatible.&lt;/p>
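The idea can be sketched in plain Python (a simplified illustration, not Qdrant&amp;rsquo;s implementation): each float component collapses to a single sign bit, and distance becomes a cheap Hamming distance over those bits, which is where the storage and speed gains come from.

```python
# Simplified illustration of binary quantization (not Qdrant's actual code):
# keep only the sign of each vector component, then compare vectors by
# Hamming distance on the resulting bit strings.

def binarize(vector: list[float]) -> int:
    """Pack the sign bits of a float vector into a single integer."""
    bits = 0
    for component in vector:
        bits = (bits << 1) | (1 if component > 0 else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits -- a cheap proxy for vector distance."""
    return bin(a ^ b).count("1")

v1 = [0.9, -0.2, 0.4, -0.7]   # binarized to 1010
v2 = [0.8, -0.1, 0.5, -0.6]   # nearly identical direction -> same bits
v3 = [-0.9, 0.2, -0.4, 0.7]   # opposite direction -> all bits differ

assert hamming(binarize(v1), binarize(v2)) == 0
assert hamming(binarize(v1), binarize(v3)) == 4
```

Each float32 component shrinks to one bit (a 32x storage reduction), and XOR-plus-popcount replaces floating-point math, which is why some models tolerate it well while others lose too much precision.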
&lt;p>Andrey worked as a Machine Learning Engineer for most of his career. He prefers practical over theoretical, working demo over arXiv paper. He is currently the CTO at Qdrant, a Vector Similarity Search Engine that can be used for semantic search, similarity matching of text, images or even videos, and also recommendations.&lt;/p></description></item><item><title>Loading Unstructured.io Data into Qdrant from the Terminal</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-unstructured/</link><pubDate>Tue, 09 Jan 2024 00:41:38 +0530</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-unstructured/</guid><description>&lt;p>Building powerful applications with Qdrant starts with loading vector representations into the system. Traditionally, this involves scraping or extracting data from sources, performing operations such as cleaning, chunking, and generating embeddings, and finally loading it into Qdrant. While this process can be complex, Unstructured.io includes Qdrant as an ingestion destination.&lt;/p>
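The manual clean-chunk-embed-load pipeline can be sketched as follows. This is a hypothetical illustration of the steps Unstructured automates; the helper names are our own, and the stub embedding function stands in for a real model.

```python
# Hypothetical sketch of the manual ingestion pipeline that Unstructured.io
# automates: clean raw text, split it into overlapping chunks, embed each
# chunk, and collect points ready to upsert into Qdrant.

def clean(text: str) -> str:
    """Normalize whitespace in the raw scraped text."""
    return " ".join(text.split())

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into `size`-character chunks with `overlap` characters shared."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(chunk_text: str) -> list[float]:
    """Stub embedding; a real pipeline would call an embedding model here."""
    return [float(len(chunk_text))]

def to_points(raw: str) -> list[dict]:
    """Produce Qdrant-style points: an id, a vector, and the chunk as payload."""
    chunks = chunk(clean(raw))
    return [{"id": i, "vector": embed(c), "payload": {"text": c}}
            for i, c in enumerate(chunks)]
```

Each resulting point maps onto the id/vector/payload shape Qdrant stores, which is exactly the hand-rolled work the ingestion destination spares you.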
&lt;p>In this blog post, we&amp;rsquo;ll demonstrate how to load data into Qdrant from the channels of a Discord server. You can use a similar process for the &lt;a href="https://unstructured-io.github.io/unstructured/ingest/source_connectors.html" target="_blank" rel="noopener nofollow">20+ vetted data sources&lt;/a> supported by Unstructured.&lt;/p></description></item><item><title>Chat with a codebase using Qdrant and N8N</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-n8n/</link><pubDate>Sat, 06 Jan 2024 04:09:05 +0530</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-n8n/</guid><description>&lt;p>n8n (pronounced n-eight-n) helps you connect any app with an API. You can then manipulate its data with little or no code. With the Qdrant node on n8n, you can build AI-powered workflows visually.&lt;/p>
&lt;p>Let&amp;rsquo;s go through the process of building a workflow. We&amp;rsquo;ll build a chat-with-a-codebase service.&lt;/p>
&lt;h2 id="prerequisites">Prerequisites&lt;/h2>
&lt;ul>
&lt;li>A running Qdrant instance. If you need one, use our &lt;a href="https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/documentation/quick-start/">Quick start guide&lt;/a> to set it up.&lt;/li>
&lt;li>An OpenAI API Key. Retrieve your key from the &lt;a href="https://platform.openai.com/account/api-keys" target="_blank" rel="noopener nofollow">OpenAI API page&lt;/a> for your account.&lt;/li>
&lt;li>A GitHub access token. If you need to generate one, start at the &lt;a href="https://github.com/settings/tokens/" target="_blank" rel="noopener nofollow">GitHub Personal access tokens page&lt;/a>.&lt;/li>
&lt;/ul>
&lt;h2 id="building-the-app">Building the App&lt;/h2>
&lt;p>Our workflow has two components. Refer to the &lt;a href="https://docs.n8n.io/workflows/create/" target="_blank" rel="noopener nofollow">n8n quick start guide&lt;/a> to get acquainted with workflow semantics.&lt;/p></description></item><item><title>"Vector search and applications" by Andrey Vasnetsov, CTO at Qdrant</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-search-and-applications-record/</link><pubDate>Mon, 11 Dec 2023 12:16:42 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/vector-search-and-applications-record/</guid><description>
&lt;p>Andrey Vasnetsov, Co-founder and CTO at Qdrant, shared insights on vector search and its applications with Learn NLP Academy.&lt;/p>
&lt;iframe width="560" height="315" src="https://www.youtube.com/embed/MVUkbMYPYTE" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen>&lt;/iframe>
&lt;p>He covered the following topics:&lt;/p>
&lt;ul>
&lt;li>Qdrant search engine and Quaterion similarity learning framework;&lt;/li>
&lt;li>Extending similarity learning to multimodal settings;&lt;/li>
&lt;li>Elasticsearch embeddings vs. vector search engines;&lt;/li>
&lt;li>Support for multiple embeddings;&lt;/li>
&lt;li>Fundraising and VC discussions;&lt;/li>
&lt;li>Vision for vector search evolution;&lt;/li>
&lt;li>Fine-tuning for out-of-domain data.&lt;/li>
&lt;/ul>
</description></item><item><title>From Content Quality to Compression: The Evolution of Embedding Models at Cohere with Nils Reimers</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/cohere-embedding-v3/</link><pubDate>Sun, 19 Nov 2023 12:48:36 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/cohere-embedding-v3/</guid><description>&lt;p>For the second edition of our Vector Space Talks we were joined by none other than Cohere’s Head of Machine Learning Nils Reimers.&lt;/p>
&lt;h2 id="key-takeaways">Key Takeaways&lt;/h2>
&lt;p>Let&amp;rsquo;s dive right into the five key takeaways from Nils&amp;rsquo; talk:&lt;/p>
&lt;ol>
&lt;li>
&lt;p>Content Quality Estimation: Nils explained how embeddings have traditionally focused on measuring topic match, but content quality is just as important. He demonstrated how their model can differentiate between informative and non-informative documents.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>Compression-Aware Training: He shared how they&amp;rsquo;ve tackled the challenge of reducing the memory footprint of embeddings, making it more cost-effective to run vector databases on platforms like &lt;a href="https://cloud.qdrant.io/login" target="_blank" rel="noopener nofollow">Qdrant&lt;/a>.&lt;/p></description></item><item><title>Pienso &amp; Qdrant: Future Proofing Generative AI for Enterprise-Level Customers</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pienso/</link><pubDate>Tue, 28 Feb 2023 09:48:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-pienso/</guid><description>&lt;p>The partnership between Pienso and Qdrant is set to revolutionize interactive deep learning, making it practical, efficient, and scalable for global customers. Pienso&amp;rsquo;s low-code platform provides a streamlined and user-friendly process for deep learning tasks. This exceptional level of convenience is augmented by Qdrant’s scalable and cost-efficient high vector computation capabilities, which enable reliable retrieval of similar vectors from high-dimensional spaces.&lt;/p>
&lt;p>Together, Pienso and Qdrant will empower enterprises to harness the full potential of generative AI on a large scale. By combining the technologies of both companies, organizations will be able to train their own large language models and leverage them for downstream tasks that demand data sovereignty and model autonomy. This collaboration will help customers unlock new possibilities and achieve advanced AI-driven solutions.&lt;/p>
&lt;h4 id="strengthening-llm-performance">Strengthening LLM Performance&lt;/h4></description></item><item><title>Powering Bloop semantic code search</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-bloop/</link><pubDate>Tue, 28 Feb 2023 09:48:00 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/case-study-bloop/</guid><description>&lt;p>Founded in early 2021, &lt;a href="https://bloop.ai/" target="_blank" rel="noopener nofollow">bloop&lt;/a> was one of the first companies to tackle semantic
search for codebases. A fast, reliable Vector Search Database is a core component of a semantic
search engine, and bloop surveyed the field of available solutions and even considered building
their own. They found Qdrant to be the top contender and now use it in production.&lt;/p>
&lt;p>This document serves as a guide for anyone looking to introduce semantic search to a novel
field and wondering whether Qdrant is a good solution for their use case.&lt;/p></description></item><item><title>Qdrant supports ARM architecture!</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-supports-arm-architecture/</link><pubDate>Wed, 21 Sep 2022 09:49:53 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-supports-arm-architecture/</guid><description>&lt;p>Processor architecture is something the end user typically does not care much about, as long as all their applications run smoothly. If you use a PC, chances are you have an x86-based device, while your smartphone most likely runs on an ARM processor. In 2020, Apple introduced its ARM-based M1 chip, which is used in modern Mac devices, including notebooks. The main differences between the two architectures are the set of supported instructions and energy consumption. ARM processors have far better energy efficiency and are cheaper than their x86 counterparts. That’s why hosting providers, including the cloud, now offer them as an affordable alternative.&lt;/p></description></item><item><title>Qdrant has joined NVIDIA Inception Program</title><link>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-joined-nvidia-inception-program/</link><pubDate>Mon, 04 Apr 2022 12:06:36 +0000</pubDate><author>info@qdrant.tech (Andrey Vasnetsov)</author><guid>https://deploy-preview-2138--condescending-goldwasser-91acf0.netlify.app/blog/qdrant-joined-nvidia-inception-program/</guid><description>&lt;p>Recently we&amp;rsquo;ve become a member of NVIDIA Inception, a program that helps boost the evolution of technology startups through access to their cutting-edge technology and experts, connects startups with venture capitalists, and provides marketing support.&lt;/p>
&lt;p>Among the many opportunities it offers, we are most excited about GPU support, an essential feature on Qdrant&amp;rsquo;s roadmap.
Stay tuned for updates.&lt;/p></description></item></channel></rss>