
Pinecone 2.0 Launches to Take Vector Search From Lab into Production
[September 14, 2021]

SAN MATEO, Calif., Sept. 14, 2021 /PRNewswire/ -- Pinecone Systems Inc., a machine learning (ML) cloud infrastructure company, today announced Pinecone 2.0, which combines the power of vector search with traditional metadata storage and filtering. Together with the release's other new features, including hybrid memory/disk storage, Pinecone 2.0 provides granular search control, ultra-low latencies, and up to a 10x reduction in infrastructure costs, making it viable for companies to replace conventional keyword-based search and recommendation systems with deep-learning-powered vector search.

Deep learning (DL), a branch of machine learning (ML), represents everything as vectors: documents, videos, even user behavior. This representation makes it possible to retrieve more relevant information from large amounts of data than traditional text-based or rule-based retrieval can. Vector search already powers the retrieval and recommendation systems inside tech giants such as Google, Spotify, Facebook, Amazon, Netflix, and Pinterest, as well as other products and services known for the quality of their search results and recommendations.
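
To make the idea concrete, here is a minimal, self-contained sketch of similarity-based retrieval (plain NumPy with toy vectors; illustrative only, not Pinecone's implementation): items are ranked by how close their vectors are to a query vector, rather than by keyword overlap.

```python
import numpy as np

# Toy 4-dimensional embeddings; in practice these come from a deep learning
# model and typically have hundreds of dimensions.
catalog = {
    "doc_a": np.array([0.90, 0.10, 0.00, 0.20]),
    "doc_b": np.array([0.10, 0.80, 0.30, 0.00]),
    "doc_c": np.array([0.85, 0.05, 0.10, 0.30]),
}
query = np.array([0.88, 0.12, 0.05, 0.25])

def cosine(u, v):
    # Cosine similarity: closer to 1.0 means more similar in direction.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank the catalog by similarity to the query vector.
for item_id, vec in sorted(catalog.items(),
                           key=lambda kv: cosine(query, kv[1]),
                           reverse=True):
    print(item_id, round(cosine(query, vec), 3))
```

A production system like Pinecone applies the same principle with approximate nearest-neighbor indexes so that it scales to millions or billions of vectors.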

Beyond that handful of tech hyperscalers, though, even large enterprises can struggle to implement vector search in production. One of the biggest hurdles is combining filters or business logic with vector-search algorithms without severely degrading results or performance. For example, enterprise software companies can make their users more productive by helping them find what they need quickly, but not if it creates a laggy experience; media platforms want to provide better content recommendations to drive engagement and retention, but only if recommendations arrive as fast as their users can scroll. Based on overwhelming demand from the market and its customers, Pinecone has developed low-latency filtering capabilities for more accurate search and recommendation systems.

Pinecone 2.0 allows companies to store metadata (e.g., topic, author, and category) with each item and to filter vector searches by this metadata in a single stage. This provides much finer control over search results and eliminates the need for slow pre- or post-filtering. End users see substantially more accurate results and recommendations at lightning speed. The metadata engine powering the filters is built into Pinecone's proprietary vector index, which lets it apply string and numeric (float) filters directly to vector-search queries with minimal overhead.
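
As an illustration, a single-stage filtered query might look like the following sketch using the Pinecone Python client. The index name, vector values, and metadata fields are hypothetical, and the exact client surface has evolved since this release, so treat it as indicative rather than definitive:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")  # placeholder key
index = pc.Index("articles")           # hypothetical index name

# Each vector is stored together with its metadata in one upsert call.
index.upsert(vectors=[
    {"id": "a1", "values": [0.12, 0.87, 0.33],
     "metadata": {"topic": "sports", "year": 2021}},
    {"id": "a2", "values": [0.91, 0.15, 0.42],
     "metadata": {"topic": "finance", "year": 2020}},
])

# Single stage: vector similarity and metadata filtering in one query,
# with no separate pre- or post-filtering pass.
results = index.query(
    vector=[0.10, 0.90, 0.30],
    top_k=5,
    filter={"topic": {"$eq": "sports"},   # string filter
            "year": {"$gte": 2021}},      # numeric filter
    include_metadata=True,
)
```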



Another key enhancement in Pinecone 2.0 is the newly introduced hybrid storage option. This addresses the other major hurdle for companies eyeing vector search: high operational costs. Vector searches typically run completely in-memory (RAM), and for companies with millions or even billions of items in their catalogs, the memory costs alone can make vector search prohibitively expensive. With a hybrid of RAM and disk, Pinecone cuts compute infrastructure costs for customers by up to 10x while maintaining low latency and the same high degree of accuracy.

"The worlds of search and databases have been fundamentally changed by machine learning and deep learning," said Edo Liberty, Founder and CEO of Pinecone. "Companies are looking at the hyperscalers and waking up to the value of vector search. Pinecone 2.0 will help them realize that value at a fraction of the cost and effort."


Additional updates in the 2.0 release include:

  • REST API - The new REST API makes Pinecone more flexible and even easier for developers to use. Users can query vectors over HTTPS with JSON payloads, from any environment that can make HTTPS calls, without installing anything or being familiar with Python (see the sketch after this list).
  • New architecture - Pinecone now provides fault tolerance, data persistence, and high availability for customers with billions of items or many thousands of operations per second. Previously, enterprises with strict reliability requirements either had to build and maintain complex infrastructure around vector-search libraries to meet those requirements, or relax their standards and risk degraded performance for their users. The new architecture is designed around Kafka and Kubernetes to make the vector database as reliable as any other enterprise-grade database.
  • SOC 2 - Pinecone is now SOC 2 audited, so enterprises with even the strictest security requirements can deploy Pinecone to production with confidence that their data is safe.
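
As a sketch of the HTTPS/JSON flow mentioned in the REST API item above, the following Python snippet issues a filtered query directly over HTTP. Any HTTP client would do; the index host, API key, and payload values are placeholders for values a real project would get from the Pinecone console:

```python
import requests

# Placeholders: the real index endpoint and API key come from the Pinecone console.
INDEX_HOST = "https://articles-abc123.svc.us-west1-gcp.pinecone.io"
API_KEY = "YOUR_API_KEY"

response = requests.post(
    f"{INDEX_HOST}/query",
    headers={"Api-Key": API_KEY, "Content-Type": "application/json"},
    json={
        "vector": [0.10, 0.90, 0.30],
        "topK": 5,
        "filter": {"topic": {"$eq": "sports"}},
        "includeMetadata": True,
    },
)
print(response.json())
```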

About Pinecone

Pinecone has built the first vector database to enable the next generation of artificial intelligence (AI) applications in the cloud. Its engineers built ML platforms at AWS (Amazon SageMaker), Yahoo, Google, Databricks, and Splunk, and its scientists have published more than 100 academic papers and patents on machine learning, data science, systems, and algorithms. Pinecone is backed by Wing Venture Capital and operates in Silicon Valley, New York, and Tel Aviv. For more information, see http://www.pinecone.io.

Pinecone Media Contact:
Lazer Cohen
[email protected]
347-753-8256

View original content: https://www.prnewswire.com/news-releases/pinecone-2-0-launches-to-take-vector-search-from-lab-into-production-301376228.html

SOURCE Pinecone Systems Inc

