
Sponsors

Zilliz / Milvus

Tip

Zilliz offers a generous free tier for its cloud-hosted vector database service, Zilliz Cloud. See below for details on how to access it.

Milvus is a distributed vector database developed by Zilliz. It is available as both open-source software and a cloud service. Milvus is an open-source project under the LF AI & Data Foundation, distributed under the Apache License 2.0.

There are two options for getting started with Milvus: run the open-source version locally, or use the free tier of Zilliz Cloud:

Milvus (Self-Hosted)

Milvus is an open-source vector database built to power embedding similarity search and AI applications. Milvus makes unstructured data search more accessible and provides a consistent user experience regardless of the deployment environment. It is released under the Apache License 2.0 and is a graduate project under the LF AI & Data Foundation. To get started, visit https://milvus.io/docs.
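The core operation a vector database accelerates is nearest-neighbor search over embeddings. A minimal pure-Python sketch of the brute-force version, with made-up 3-dimensional vectors (Milvus replaces this loop with distributed, indexed search over millions of high-dimensional vectors):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two vectors: dot product over the product of norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query, vectors, top_k=2):
    """Return the top_k (id, score) pairs, most similar first."""
    scored = [(doc_id, cosine_similarity(query, vec)) for doc_id, vec in vectors.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:top_k]

# Toy embeddings; real collections hold hundreds of dimensions.
vectors = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
results = search([1.0, 0.05, 0.0], vectors)
print(results[0][0])  # id of the most similar document
```

The document ids and vectors are invented for illustration; in practice the embeddings come from a model and are inserted through the Milvus client SDKs.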

Zilliz Cloud (Free Tier)

Zilliz Cloud is the managed version of Milvus. You can sign up for a free account and get access to two collections, each of which can store up to 500,000 vectors at 768 dimensions. Find out more here: https://cloud.zilliz.com/signup.
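Rough storage math for that free tier, assuming float32 (4-byte) vector components and ignoring index and metadata overhead:

```python
# Free-tier limits from the description above.
vectors_per_collection = 500_000
dimensions = 768
bytes_per_float32 = 4  # assumption: vectors stored as float32

raw_bytes = vectors_per_collection * dimensions * bytes_per_float32
print(f"{raw_bytes / 1e9:.2f} GB of raw vector data per collection")  # ~1.54 GB
```

So the free tier covers roughly 1.5 GB of raw vector data per collection, about 3 GB across both, before any index structures are counted.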

Resources

Community

Twelve Labs

Tip

Twelve Labs is providing 10 hours of free credits for all attendees to play with our video embedding and video-language models.

Twelve Labs builds multimodal foundation models that generate powerful vector embeddings to enable a wide range of downstream video understanding applications.

  • Video embedding model: This model, named Marengo, converts videos into multimodal video embeddings that enable fast and scalable task execution without storing the entire video. Marengo has been trained on a vast amount of video data, and it can recognize entities, actions, patterns, movements, objects, scenes, and other elements present in videos. By integrating information from different modalities, the model can be used for several downstream tasks, such as search using natural language queries.
  • Video language model: This model, named Pegasus, bridges the gap between visual and textual understanding by integrating text and video data in a common embedding space. The platform uses this model for tasks that involve generating or understanding natural language in the context of video content, such as summarizing videos and answering questions.
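Embeddings in a shared space turn cross-modal search into a nearest-neighbor problem: embed the text query, then rank video-segment embeddings by similarity. A sketch with made-up unit-length vectors (the real embeddings would come from the Twelve Labs API; the segment labels and query are hypothetical):

```python
def dot(a, b):
    """Dot product; for unit-length vectors this equals cosine similarity."""
    return sum(x * y for x, y in zip(a, b))

# Hypothetical embeddings for 10-second video segments.
segments = {
    "00:00-00:10": [0.1, 0.9, 0.1],
    "00:10-00:20": [0.8, 0.2, 0.1],
    "00:20-00:30": [0.2, 0.1, 0.9],
}
query_embedding = [0.85, 0.15, 0.1]  # e.g. "a car chase", embedded as text

best = max(segments, key=lambda seg: dot(segments[seg], query_embedding))
print(best)  # timestamp of the segment closest to the query
```

This is why no tags or metadata are needed: the ranking falls out of the geometry of the shared embedding space.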

Built by developers, for developers, our APIs provide access to these advanced multimodal foundation models, enabling capabilities such as:

  • Powerful semantic search: Find exact moments within any video using natural language queries, without the need for tags or metadata.
  • Video-to-text generation: Generate deep analyses, video-specific Q&A, or highlights for any video content.
  • Zero-shot classification: Utilize natural language to create your custom taxonomies, allowing for precise and efficient video classification tailored to your unique use case.
  • Intuitive integration: Embed our video understanding models into your application with just a few API calls.
  • Rapid result retrieval: Obtain results within seconds.
  • Scalability: Our cloud-native distributed infrastructure effortlessly handles thousands of concurrent requests.
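Zero-shot classification from the list above can be sketched the same way: embed each user-defined class label, then assign every video to the nearest label. All embeddings and names here are invented for illustration:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Hypothetical embeddings for custom taxonomy labels and for videos.
labels = {
    "cooking": [0.9, 0.1, 0.0],
    "sports":  [0.0, 0.9, 0.1],
}
videos = {
    "clip_1": [0.8, 0.2, 0.1],
    "clip_2": [0.1, 0.95, 0.05],
}

# Assign each video the label whose embedding it is most similar to.
classified = {
    vid: max(labels, key=lambda lbl: dot(labels[lbl], emb))
    for vid, emb in videos.items()
}
print(classified)
```

Because the taxonomy is just a set of text labels, changing the classification scheme means re-embedding a few strings, not retraining a model.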

Resources

  • Quickstart tutorial and API Reference: Built on top of our state-of-the-art multimodal foundation model optimized for videos, the platform enables you to add rich, contextual video understanding to your applications through developer-friendly APIs.
  • Twelve Labs Recipes: These will give you a running start on commonly used endpoints.
  • Twelve Labs Sample Apps: These sample apps showcase different use cases developers have built with our API.

Twelve Labs provides the following client SDKs that enable you to integrate and utilize the platform within your application:

Additional Links

Here’s how to stay connected with our community:

Here’s where we publish content:

Arize AI

Arize AI is a unified AI observability and LLM evaluation platform that helps teams develop and maintain more successful AI. Arize’s automated monitoring and observability platform allows teams to quickly detect issues when they emerge, troubleshoot why they happened, and improve overall performance across both traditional ML and generative use cases. Arize is headquartered in Berkeley, CA.

Phoenix

Phoenix is an open-source observability library designed for experimentation, evaluation, and troubleshooting. It allows AI Engineers and Data Scientists to quickly visualize their data, evaluate performance, track down issues, and export data to improve.

Phoenix is built by Arize AI, the company behind the industry-leading AI observability platform, and a set of core contributors.

Resources

Social

StreamNative

Tip

StreamNative is offering $200 in free credits for their cloud data platform.

StreamNative, founded by the creators of Apache Pulsar, is redefining real-time data streaming. Our platform empowers organizations to process and analyze massive data streams at scale with unparalleled efficiency. At its core is the URSA engine, which seamlessly integrates Apache Pulsar and Apache Kafka, delivering unmatched compatibility and performance. By simplifying deployments and enabling direct integration with modern lakehouse architectures, URSA helps businesses innovate faster and more effectively. With our cost-efficient solutions, companies can build cutting-edge, real-time applications that drive measurable outcomes.
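The abstraction shared by Pulsar and Kafka underneath all of this is an append-only topic that independent consumers read at their own pace. A toy in-memory version, not the Pulsar API itself (the real platform adds persistence, partitioning, and distribution):

```python
class Topic:
    """Append-only message log; each subscriber tracks its own read offset."""

    def __init__(self):
        self.messages = []
        self.offsets = {}

    def publish(self, message):
        self.messages.append(message)

    def subscribe(self, name):
        self.offsets.setdefault(name, 0)

    def poll(self, name):
        """Return all messages this subscriber has not yet seen."""
        offset = self.offsets[name]
        new = self.messages[offset:]
        self.offsets[name] = len(self.messages)
        return new

topic = Topic()
topic.subscribe("analytics")
topic.publish({"event": "page_view"})
topic.publish({"event": "click"})
print(topic.poll("analytics"))  # both events, in publish order
```

Per-subscriber offsets are what let multiple downstream systems consume the same stream independently; this is the semantics the URSA engine exposes through both the Pulsar and Kafka protocols.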

Resources

You can take advantage of $200 in free credit to get started with StreamNative. Create your free account in the Cloud Console, and the credit will be applied to your account automatically.

Additional Links

Here’s how to stay connected with our community:

OmniStack

Tip

OmniStack is providing over $500 in free inference credits for the duration of the hackathon.

OmniStack is a developer platform that accelerates AI integration into applications and makes them production-ready by providing essential tools like workflow building, observability, evals, failover, and model deployment. It also provides access to 100+ models from all major providers.
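Of the tools listed, failover is the simplest to sketch: try a primary model and fall back to alternatives on error. The model names and the backend function here are hypothetical stand-ins, not the OmniStack API:

```python
def call_with_failover(prompt, models, call_model):
    """Try each model in order; return (model, response) from the first success."""
    errors = {}
    for model in models:
        try:
            return model, call_model(model, prompt)
        except Exception as exc:  # real code would catch narrower error types
            errors[model] = exc
    raise RuntimeError(f"all models failed: {errors}")

# Hypothetical backend: the first model is "down", the second works.
def fake_call(model, prompt):
    if model == "primary-model":
        raise ConnectionError("unavailable")
    return f"{model} answered: {prompt}"

model, reply = call_with_failover("hello", ["primary-model", "backup-model"], fake_call)
print(model)  # backup-model
```

A platform-level version of this would also handle retries, timeouts, and per-model rate limits, which is what makes a managed gateway attractive over hand-rolled fallback logic.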

You can use the OmniStack platform to run 100+ pre-deployed models, including Llama, or to deploy fine-tuned models.

We currently support models with image and text inputs and text outputs, and we don't yet support models with image and audio outputs.

Please make sure to fill out this form to get an extra $500 for the duration of the hackathon: https://forms.gle/emGYPepgmSGghqcA7

Using third-party models:

Deploying fine-tuned/uncensored models:


Additional Links:

AWS

Important

AWS is awarding $10,000 to the winning team! Think of what you could accomplish with that much compute...

Amazon Web Services (AWS) is a subsidiary of Amazon that provides on-demand cloud computing platforms and APIs to individuals, companies, and governments, on a metered, pay-as-you-go basis.

Resources

Mistral

Important

Mistral is awarding $500 in credits to the top team that uses Mistral technology in their submission.

Mistral AI, headquartered in Paris, France, specializes in artificial intelligence (AI) products and focuses on open-weight large language models (LLMs), an alternative to proprietary models. Learn more at mistral.ai.

Resources