Elastic and Hugging Face Team Up For Enhanced AI Development

Search AI company Elastic says Elasticsearch Open Inference API now supports Hugging Face models with native chunking through the integration of the semantic_text field.

Using the Elasticsearch Open Inference API integration with Hugging Face Inference Endpoints, developers can now ship generative AI applications quickly, without writing custom chunking logic.
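As a rough sketch of how the integration fits together (the endpoint names, index name, and placeholder credentials below are illustrative, and assume an Elasticsearch version with `semantic_text` support plus a Hugging Face Inference Endpoint URL and token): an inference endpoint is registered via the Open Inference API, and an index field mapped as `semantic_text` then handles chunking and embedding automatically at index time.

```
PUT _inference/text_embedding/hf-embeddings
{
  "service": "hugging_face",
  "service_settings": {
    "api_key": "<HF_TOKEN>",
    "url": "<HF_INFERENCE_ENDPOINT_URL>"
  }
}

PUT articles
{
  "mappings": {
    "properties": {
      "content": {
        "type": "semantic_text",
        "inference_id": "hf-embeddings"
      }
    }
  }
}
```

With this mapping in place, indexing a long document into `content` chunks and embeds it without a custom ingest pipeline, and it can be searched semantically, e.g. with a `semantic` query on that field.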

“Combining Hugging Face’s embeddings with Elastic’s retrieval relevance tools helps users gain better insights and improve search functionality,” said Hugging Face’s Jeff Boudier. “Hugging Face makes it easy for developers to build their own AI. With this integration, developers get a complete solution to leverage the best open models for semantic search, hosted on Hugging Face multi-cloud GPU infrastructure, to build semantic search experiences in Elasticsearch without worrying about storing or chunking embeddings.”

“Developers are at the heart of our business, and extending more of our GenAI and search primitives to Hugging Face developers deepens our collaboration,” said Elastic’s Matt Riley. “The integration of our new semantic_text field simplifies the process of chunking and storing embeddings, so developers can focus on what matters most: building great applications.”

The integration of semantic_text support follows the addition of Hugging Face embeddings models to Elastic’s Open Inference API.
