Date/Time
Date(s) - 24/06/2025 - 15/07/2025
19:00 - 19:15
No single company has an accurate map of the future. The most accurate and resilient vision of what’s ahead is an open one – rooted in collaboration and empowered by AI. Your AI enterprise will consist of many moving parts, but at its core it must be grounded in a few key principles: the democratization of data through accessible training tools that lower barriers to entry; methods that improve resource efficiency; robust lifecycle management of models at enterprise scale; and the flexibility to extend your AI capabilities wherever your mission leads. Together, these pillars form the foundation for a sustainable, scalable, and adaptable AI practice.
6/24 – Session 1: Lowering Barriers with InstructLab
- Tuning generative AI models is cost-prohibitive, demanding highly skilled staff to tune models on mission-centric data at scale and across hybrid cloud and edge environments.
- InstructLab offers an open source methodology for tuning the LLM of your choice, easing the burden of creating synthetic training data while using far fewer computing resources and lowering costs (see the workflow sketch below).
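As a hedged illustration of that workflow, the sketch below drives InstructLab from Python: it writes a small seed qna.yaml into a taxonomy, then shells out to the `ilab` CLI to generate synthetic data and tune a model. The taxonomy path, qna.yaml fields, and exact `ilab` subcommand names are assumptions that vary across InstructLab releases.

    # Sketch: an InstructLab tuning loop driven from Python.
    # Assumes `ilab` is installed (pip install instructlab) and that the
    # subcommand names match recent releases; both are assumptions.
    import subprocess
    from pathlib import Path

    # 1. Contribute mission-specific seed examples to the taxonomy.
    #    The path and field names are illustrative, not authoritative.
    seed = Path("taxonomy/compositional_skills/mission/qna.yaml")
    seed.parent.mkdir(parents=True, exist_ok=True)
    seed.write_text(
        "created_by: example-user\n"
        "seed_examples:\n"
        "  - question: What lowers the barrier to tuning an LLM?\n"
        "    answer: Synthetic data generated from a few seed examples.\n"
    )

    # 2. Generate synthetic training data from the seeds, then tune.
    subprocess.run(["ilab", "data", "generate"], check=True)
    subprocess.run(["ilab", "model", "train"], check=True)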
7/1 – Session 2: Driving Efficiency with vLLM
- Serving LLMs is resource-intensive, placing heavy demands on hardware with hefty price tags to meet the scalability and speed that agencies demand.
- vLLM lets agencies “do more with less,” offering LLM inferencing and serving with greater efficiency and scale, and up to 24x higher throughput (see the usage sketch below).
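As a concrete illustration, here is a minimal offline-inference sketch using vLLM’s Python API; the model ID is an assumption chosen only to keep the example small.

    # Minimal vLLM offline-inference sketch (pip install vllm).
    # The model ID below is an assumption for illustration.
    from vllm import LLM, SamplingParams

    prompts = [
        "Summarize why efficient LLM serving matters for agency missions.",
    ]
    params = SamplingParams(temperature=0.7, max_tokens=128)

    # vLLM batches requests continuously and manages KV-cache memory
    # with PagedAttention, which is where its throughput gains come from.
    llm = LLM(model="facebook/opt-125m")
    for output in llm.generate(prompts, params):
        print(output.outputs[0].text)

The same engine can also be launched as an OpenAI-compatible server for online serving.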
7/8 – Session 3: Managing Model Lifecycles with OpenShift AI
- Cross-functional AI teams want self-service access to workspaces, on available or GPU-accelerated computing resources, integrated with their choice of tools, so they can collaborate and get to production quickly without the toil and friction of current approaches.
- OpenShift AI provides a platform that streamlines and automates the MLOps lifecycle, with pipelines for generative and predictive AI models covering data acquisition and preparation, model training and fine-tuning, and model serving and monitoring, applied consistently from the edge through hybrid clouds (see the pipeline sketch below).
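OpenShift AI’s data science pipelines build on Kubeflow Pipelines, so a hedged sketch of such a lifecycle can be written with the kfp v2 SDK; the step bodies below are placeholders standing in for real data preparation, training, and serving logic.

    # Sketch of an MLOps lifecycle pipeline with the Kubeflow Pipelines
    # (kfp) v2 SDK, which underlies OpenShift AI's data science pipelines.
    # Step contents are illustrative placeholders.
    from kfp import compiler, dsl

    @dsl.component
    def prepare_data() -> str:
        # Placeholder: acquire and prepare training data.
        return "dataset-v1"

    @dsl.component
    def train_model(dataset: str) -> str:
        # Placeholder: train or fine-tune a model on the dataset.
        return f"model-trained-on-{dataset}"

    @dsl.component
    def serve_model(model: str):
        # Placeholder: hand the model off to a serving runtime.
        print(f"deploying {model}")

    @dsl.pipeline(name="mlops-lifecycle-sketch")
    def lifecycle_pipeline():
        data = prepare_data()
        trained = train_model(dataset=data.output)
        serve_model(model=trained.output)

    if __name__ == "__main__":
        # Compile to YAML that a pipeline server can execute.
        compiler.Compiler().compile(lifecycle_pipeline, "pipeline.yaml")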
7/15 – Session 4: Inferencing at the Edge
- Agencies demand actionable intelligence where the mission occurs, but managing a distributed network of AI-enabled edge devices in constrained or disconnected environments brings significant operational complexity.
- Red Hat enables AI inferencing and serving in disconnected, resource-constrained environments by providing lightweight, flexible platforms that allow models to run locally, without relying on cloud connectivity, through containerized deployments, efficient resource usage, and secure, automated updates (see the client sketch below).
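To make “run locally” concrete, the sketch below queries a model served on the edge device itself over an OpenAI-compatible endpoint, which servers such as vLLM expose; the localhost URL and model name are assumptions.

    # Sketch: querying a locally served model in a disconnected environment.
    # Assumes an OpenAI-compatible server is already running on the device
    # (for example via `vllm serve`); URL and model name are assumptions.
    import requests

    LOCAL_ENDPOINT = "http://localhost:8000/v1/completions"

    payload = {
        "model": "facebook/opt-125m",
        "prompt": "Classify this sensor report as routine or anomalous: ...",
        "max_tokens": 64,
    }

    # The request never leaves the device, so no cloud connectivity is needed.
    response = requests.post(LOCAL_ENDPOINT, json=payload, timeout=30)
    response.raise_for_status()
    print(response.json()["choices"][0]["text"])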