KubeCon + CloudNativeCon North America is happening this week in Salt Lake City, UT, bringing together the Kubernetes community in one location and providing the opportunity for companies in the space to launch new offerings and update their products.
We’ve collected the news announcements from those companies all in one place so you can stay up to date. Keep checking back here, as we will be updating this list as news comes in.
Last updated: 11/12 at 1:08 PM ET
Cloud Native Computing Foundation Announces cert-manager Graduation
The Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud native software, today announced the graduation of cert-manager.
cert-manager helps cloud native developers automate Transport Layer Security (TLS) and Mutual Transport Layer Security (mTLS) certificate issuance and renewal. It ensures secure communication within distributed systems by automating and simplifying the issuance, renewal, and lifecycle management of X.509 certificates in Kubernetes platforms. This eliminates the manual process of generating and managing certificates and helps ensure systems remain secure without constant manual intervention.
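In practice, that automation is driven by declarative resources such as cert-manager’s Certificate custom resource. The minimal sketch below, written with the official Kubernetes Python client, shows the general shape of such a request; the field names follow the cert-manager.io/v1 Certificate CRD, while the issuer name, hostname, and renewal window are hypothetical placeholders.

```python
# Sketch: request a TLS certificate declaratively via cert-manager's
# Certificate custom resource (cert-manager.io/v1). Assumes a ClusterIssuer
# named "letsencrypt-prod" already exists and kubeconfig credentials are
# available locally; the hostname and renewal window are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

certificate = {
    "apiVersion": "cert-manager.io/v1",
    "kind": "Certificate",
    "metadata": {"name": "example-tls", "namespace": "default"},
    "spec": {
        "secretName": "example-tls",       # cert-manager writes the signed cert/key here
        "dnsNames": ["example.internal"],  # hypothetical hostname
        "issuerRef": {"name": "letsencrypt-prod", "kind": "ClusterIssuer"},
        "renewBefore": "360h",             # renew 15 days before expiry
    },
}

api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="cert-manager.io",
    version="v1",
    namespace="default",
    plural="certificates",
    body=certificate,
)
```

Once the object is created, cert-manager’s controllers handle issuance and renewal and store the resulting key pair in the referenced Secret.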
“By making it easier for developers to obtain, manage, and automate security certificates, cert-manager helps ensure applications remain secure throughout their lifecycles, making the ecosystem more secure as a whole,” said Chris Aniszczyk, CTO of CNCF. “We’re thrilled to see the project reach this milestone and look forward to it continuing to improve the cloud native security space.”
cert-manager was created in 2017 at Jetstack, which is now a part of Venafi, a CyberArk company. It was accepted into the CNCF Sandbox in November 2020, and, over the past four years, has continued to grow, bringing in new maintainers, expanding its user base, and adding key features in response to community needs. It has built a network of more than 450 contributors and issued more than 200 releases. It moved to the Incubating maturity level in 2022 and today plays a vital role in the CNCF ecosystem by integrating with other projects like Kubernetes, SPIFFE, Istio, Prometheus, and Envoy to strengthen cloud native infrastructure security across diverse environments.
The project’s roadmap includes support for ACME Renewal Information (ARI), which will provide a cleaner method for renewing certificates over the ACME protocol, as well as an effort to shrink cert-manager’s core components, reducing the attack surface, binary size, container size, and overall complexity while enabling best-practice PKI management.
Fluent Bit v3.2: Faster, Lighter Telemetry Agent & Processor
At KubeCon + CloudNativeCon North America, the Fluent community announced Fluent Bit v3.2, delivering better performance, increased efficiency, and new capabilities with OpenTelemetry (OTel), YAML, and eBPF. Fluent Bit is a CNCF-graduated project under the umbrella of Fluentd, alongside other foundational technologies such as Kubernetes and Prometheus. Fluent Bit hit 1 billion downloads in 2022 and has since exploded to surpass 15 billion downloads in the last year.
Fluent Bit v3.2 highlights its continued focus on performance and extensibility. It delivers new capabilities enabling users to seamlessly collect and manage skyrocketing data volumes and diverse data types, including support for new signals. The v3.2 release builds on Fluent Bit’s foundation as a universal telemetry agent with key new capabilities:
- Performance Improvements upon industry-leading speed:
  - An updated JavaScript Object Notation (JSON) encoder with Single Instruction, Multiple Data (SIMD) support provides improvements for intensive workloads. Recent benchmarks have shown up to a 30% decrease in CPU usage, a 15% decrease in memory usage, and a 15% decrease in energy consumption. SIMD support for log processing and parsing comes out of the box, providing users with immediate performance benefits without any additional work.
  - The new capabilities in v3.2 build on Fluent Bit’s core with continued default multi-threading for inputs, outputs, and processing across multiple observability signal types: logs, metrics, and traces.
- More signal type support with Blob & eBPF:
  - With v3.2, Fluent Bit expands beyond telemetry data. Users can now collect and move massive files, including photos and videos, to storage destinations such as Azure Blob. This has specific applications for IoT and AI use cases, where videos and photos are used to help train AI models.
  - v3.2 adds support for Extended Berkeley Packet Filter (eBPF), unlocking security and advanced observability use cases. It introduces out-of-the-box eBPF capabilities, allows users to plug in their own eBPF programs, and includes new integrations with other CNCF eBPF projects, such as Falco and Tracee, for security use cases.
- Increased Compatibility (OTel and YAML):
  - The Fluent Bit agent can now collect data and use the OTel Envelope processor to convert logs to the correct format for any OTel backend. With OTel becoming the de facto protocol standard for observability, Fluent Bit continues its integration and standardization with increased compatibility across logs, metrics, and traces.
  - v3.2 also includes full support for YAML, the standard for Kubernetes configuration, in every part of the Fluent Bit pipeline: parsers, configuration, processors, and settings. This allows a single, unified configuration language across both Fluent Bit and Kubernetes resources (a hedged configuration sketch follows this list).
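To make the unified-YAML point concrete, here is a minimal sketch of what a Fluent Bit YAML pipeline can look like, generated with Python and PyYAML purely for illustration. The overall service/pipeline layout follows Fluent Bit’s YAML configuration format, but the tail path, the OTel endpoint, and the processor name (`opentelemetry_envelope`) are assumptions that should be checked against the official Fluent Bit documentation.

```python
# Sketch: a minimal Fluent Bit YAML pipeline, emitted via PyYAML purely for
# illustration. The processor name "opentelemetry_envelope", the input path,
# and the output parameters are assumptions to verify against the docs.
import yaml  # pip install pyyaml

fluent_bit_config = {
    "service": {"flush": 1, "log_level": "info"},
    "pipeline": {
        "inputs": [
            {
                "name": "tail",
                "path": "/var/log/app/*.log",  # hypothetical application logs
                "processors": {
                    "logs": [
                        {"name": "opentelemetry_envelope"},  # wrap records in OTel structure
                    ]
                },
            }
        ],
        "outputs": [
            {
                "name": "opentelemetry",
                "match": "*",
                "host": "otel-collector.example",  # hypothetical OTel backend
                "port": 4318,
            }
        ],
    },
}

with open("fluent-bit.yaml", "w") as f:
    yaml.safe_dump(fluent_bit_config, f, sort_keys=False)
```

Writing the file from Python is only a convenience for this sketch; in practice the YAML would live alongside the rest of a cluster’s configuration.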
“While Fluent Bit throughput and resource usage are already best in class, v3.2 introduces massive performance upgrades, new ecosystem integrations, and signal support. From the beginning, Fluent Bit was built to integrate with best-in-class technologies, open source standards, and with a commitment to vendor neutrality. It enables users to build the best tech stack for them,” said Eduardo Silva Pereira, original creator of Fluent Bit and Engineering Manager at Chronosphere. “Fluent Bit v3.2 brings us close to delivering upon that vision.”
Red Hat adds new AI capabilities for Red Hat Developer Hub
Red Hat today announced new capabilities and enhancements for Red Hat Developer Hub, the company’s enterprise-grade internal developer platform based on the Backstage project.
The new features are designed to help organizations, whether already implementing an AI strategy or just coming to grips with its possibilities, more quickly and easily harness the power of AI to deliver smarter applications and services to their customers and end-users.
To help accelerate developer competencies for building AI-enabled applications, Red Hat Developer Hub is introducing five new AI-focused software templates for organizations to get started developing applications for common AI use cases.
The new templates include:
- Audio to text application: An AI-enabled audio transcription application where users can upload an audio file to be transcribed.
- Chatbot application: An LLM-enabled chat application to create a bot that replies with AI-generated responses.
- Code generation application: An LLM-enabled code generation application for a specialized bot that helps with code-related queries.
- Object detection application: Enables developers to upload an image to identify and locate objects in the image.
- Retrieval Augmented Generation (RAG) chatbot application: Enables developers to embed files containing relevant information so the model can provide more accurate responses (a generic sketch of this pattern follows the list).
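That last template follows the standard RAG pattern: embed reference documents, retrieve the closest match to a question, and ground the model’s prompt in it. The sketch below is a generic, self-contained illustration of that pattern, not Red Hat’s template code; the documents and the toy bag-of-words embedding are stand-ins for a real embedding model and vector store.

```python
# Generic RAG sketch (not Red Hat's template code): retrieve the most relevant
# document for a question and prepend it to the model prompt. A toy
# bag-of-words embedding stands in for a real embedding model.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: term-frequency vector over lowercase words."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

documents = [
    "Invoices are processed within 30 days of receipt.",
    "Support tickets are triaged by severity every morning.",
]
index = [(doc, embed(doc)) for doc in documents]

def build_prompt(question: str) -> str:
    # Retrieve the document most similar to the question ...
    context, _ = max(index, key=lambda item: cosine(item[1], embed(question)))
    # ... and ground the model's answer in it.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long does invoice processing take?"))
```

A production template would swap the toy embedding for a real embedding model and a vector store, but the retrieve-then-prompt flow stays the same.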
With Red Hat Developer Hub’s integration with Red Hat OpenShift, organizations can also more easily deploy their applications to the platform.
As AI assets within organizations grow exponentially, having a central resource to locate, manage, and access these vital assets is paramount to enabling developers to move more quickly. With Red Hat Developer Hub’s software catalog, developers and platform engineers can record and share the details of their organization’s AI assets, including LLMs, AI servers, and associated APIs.
The latest AI enhancements are generally available with Red Hat Developer Hub.
Observe introduces AI capabilities to troubleshoot faster in Kubernetes environments
Observability platform provider Observe, Inc. today launched Kubernetes Explorer, designed to simplify visualizing and troubleshooting for cloud-native environments. Kubernetes Explorer enables DevOps teams, site reliability engineers (SREs) and software engineers to easily understand disparate Kubernetes components, detect issues quickly, uncover root causes and resolve them faster than ever before.
According to the 2024 Gartner Critical Capabilities for Container Management report, “by 2027, more than 75% of all AI deployments will use container technology as the underlying compute environment, up from less than 50% today.” As Kubernetes adoption continues to grow, driven by AI and edge computing trends, the complexity of observing distributed applications and infrastructure has increased. Observe addresses this challenge by unifying fragmented data across metrics, traces, and logs, providing insights that span applications, the Kubernetes platform, and cloud-native infrastructure.
Observe’s AI Investigator tightly integrates with Kubernetes Explorer to create custom, incident-specific visualizations and suggestions, providing on-call engineers with an expert Kubernetes assistant while troubleshooting. Observe launched its new AI Investigator – based on an agentic AI approach – last month as part of its most significant product update to date, along with $145 million in Series B funding.
Additional Kubernetes Explorer features include:
- Kubernetes Hindsight: Provides historical visibility so teams can do retrospective analysis and performance optimization in ephemeral container environments.
- Cluster Optimization: Offers a visual map of workload distribution across the Kubernetes cluster, enabling quick identification of underutilized capacity and optimization of resources. This capability is crucial as the latest CNCF cloud-native FinOps survey found half of organizations overspend on Kubernetes infrastructure, primarily due to over-provisioning.
- Resource Descriptors: Delivers comprehensive visibility into full YAML configurations of Kubernetes resources, maintaining deployment descriptor history for easy version comparison.
For more information about Kubernetes Explorer, visit www.observeinc.com.
Komodor Introduces Single Pane of Glass K8s Management Solution
Komodor announced a new version of its platform that extends its existing Kubernetes management capabilities to support the full ecosystem of K8s add-ons (including popular CRDs and operators).
Komodor now enables Platform Engineering teams and developers to visualize, operate, detect, investigate, remediate, and optimize all the components in Kubernetes clusters, including workloads, native resources, and the complex ecosystem of add-ons. The company will demonstrate the Komodor platform at KubeCon 2024, booth R9.
As Kubernetes adoption grows, so does an organization’s reliance on add-ons – such as package managers, workflow automation, data streaming, and networking – that extend its core functionality. These tools are vital, but they require specialized expertise to manage and can introduce operational risks when misconfigured. Komodor centralizes and automates the daily operation, health management, and troubleshooting of issues associated with add-ons, along with native Kubernetes resources, to prevent cascading failures, latency, and performance degradation and to enhance long-term reliability.
One example is cert-manager (the leading certificate manager add-on), which is present in virtually every Kubernetes environment. When misconfigured, certificates can expire unnoticed, leading to application outages. Komodor’s automated detection and root cause analysis not only identifies these issues before they can impact operations, but also provides a clear path to remediation, saving hours of manual troubleshooting and avoiding downtime.
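As a rough illustration of the failure mode being monitored (and not Komodor’s detection logic), the sketch below checks how many days remain on a service’s TLS certificate and warns when it falls inside an assumed 15-day renewal window; the endpoint and threshold are hypothetical.

```python
# Generic sketch (not Komodor's implementation): warn when a service's TLS
# certificate is close to expiry. Host and threshold are illustrative.
import socket
import ssl
import time

def days_until_expiry(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires_at - time.time()) / 86400

if __name__ == "__main__":
    remaining = days_until_expiry("example.com")  # hypothetical endpoint
    if remaining < 15:  # renewal window; tune to your issuer's policy
        print(f"WARNING: certificate expires in {remaining:.1f} days")
    else:
        print(f"OK: {remaining:.1f} days of validity left")
```

In practice, this kind of check complements, rather than replaces, automated renewal and root cause tooling.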
“Kubernetes has evolved from a container orchestration platform into a sprawling ecosystem that requires a multitude of add-ons – ranging from autoscaling and security to storage and networking – to meet modern operational demands,” said Itiel Shwartz, Co-Founder & CTO of Komodor.
The new capabilities include:
- Centralized Visibility & Management: Komodor provides a central console for visibility and control over all Kubernetes add-ons. This single pane of glass simplifies daily operations and enables DevOps engineers to understand how each add-on interacts with other assets in their environment.
- Proactive Risk Discovery & Automated Troubleshooting: Using Komodor’s proprietary technology and AI-driven root cause analysis, the new capabilities provide out-of-the-box detection of pending issues before they impact operations with real-time alerts and actionable insights. Whether it’s a misconfigured cert-manager causing certificate renewal failures or a failing autoscaler, Komodor rapidly pinpoints the root cause of issues and offers intuitive, automated remediation playbooks.
- Reduced Operational Complexity: By automating the root cause analysis of issues, Komodor reduces the complexity associated with manually maintaining multiple add-ons, shortens mean time to repair (MTTR), and enables developers to fix problems on their own.
Mezmo unveils Mezmo Flow for guided data onboarding and log volume optimization
Mezmo today unveiled Mezmo Flow, a guided experience for building telemetry pipelines. With Mezmo Flow, users can quickly onboard new log sources, profile data, and implement recommended optimizations with a single click to reduce log volumes by more than 40%. With this release, Mezmo enables next-generation log management: a pipeline-first log analysis solution that helps companies control incoming data volumes, identify the most valuable data, and glean insights faster, without the need to index data in expensive observability tools.
Developers should not have to choose between how much they can log and how fast they can debug and troubleshoot issues, especially with custom applications. SREs need an easy way to understand logs, monitor any data spikes, solve any infrastructure issues, and easily provision data to downstream teams and systems. The new release from Mezmo streamlines both developer and SRE workflows.
With Mezmo Flow, users can create their first log volume reduction pipeline in less than 15 minutes, retaining the most valuable data and preventing unnecessary charges, overages, and spikes. Next-generation log management is a pipeline-first approach to log analysis that improves the quality of critical application logs, raising the signal-to-noise ratio and increasing developer productivity. Alerts and notifications on data in motion help users take timely action on accidental application log volume spikes or changes in metrics.
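As a generic sketch of what pipeline-style log volume reduction can look like (not Mezmo’s actual processors), the example below keeps warnings and errors, drops debug noise, and samples routine info lines; the levels and sampling rate are purely illustrative.

```python
# Generic illustration of pipeline-style log volume reduction (not Mezmo's
# actual processors): keep every warning/error, drop debug lines, and sample
# repetitive info lines at roughly 1-in-10.
import random

KEEP_ALWAYS = {"WARN", "ERROR", "FATAL"}
INFO_SAMPLE_RATE = 0.1  # illustrative; tuned per source in a real pipeline

def reduce_volume(lines):
    for line in lines:
        level = line.split(" ", 1)[0].upper()
        if level in KEEP_ALWAYS:
            yield line                      # never drop actionable signals
        elif level == "DEBUG":
            continue                        # debug noise is dropped outright
        elif random.random() < INFO_SAMPLE_RATE:
            yield line                      # sampled slice of routine info logs

logs = [
    "INFO user login ok",
    "DEBUG cache hit",
    "ERROR payment failed",
    "INFO user login ok",
]
print(list(reduce_volume(logs)))
```

Real pipelines make these decisions per source and expose them as tunable processors rather than hard-coded rules.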
As part of its recent release, Mezmo is also introducing a series of new capabilities to simplify action and control for developers and SREs. These include:
- Data profiler enhancements: Analyze and understand structured and unstructured logs while continuously monitoring log volume trends across applications.
- Processor groups: Create multifunctional, reusable pipeline components, improving pipeline development time and ensuring standardization and governance over data management.
- Shared resources: Configure sources once and use them for multiple pipelines. This ensures data is delivered to the right users in their preferred tools with as little overhead as possible.
- Data aggregation for insights: Collect and aggregate telemetry metrics such as log volume or errors per application, host, and user-defined label. The aggregated data is available as interactive reports that surface insights such as application log volume or error trends, and it can be used to detect anomalies such as volume surges and alert users to help prevent overages (a generic sketch of this kind of aggregation follows below).
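As a generic illustration of that kind of aggregation and surge detection (again, not Mezmo’s implementation), the sketch below counts events per application and host and flags any group whose volume jumps well past a recent baseline; the threshold and sample data are arbitrary.

```python
# Generic sketch of telemetry aggregation and surge detection (not Mezmo's
# implementation): count events per (app, host) and flag groups whose current
# volume exceeds a multiple of their previous baseline.
from collections import Counter

SURGE_FACTOR = 3.0  # illustrative threshold

def aggregate(events):
    """events: iterable of dicts with 'app' and 'host' keys."""
    return Counter((e["app"], e["host"]) for e in events)

def detect_surges(previous: Counter, current: Counter):
    for key, count in current.items():
        baseline = previous.get(key, 0)
        if baseline and count > SURGE_FACTOR * baseline:
            yield key, baseline, count

previous = Counter({("checkout", "node-1"): 100})
current = Counter({("checkout", "node-1"): 450, ("search", "node-2"): 80})
for (app, host), before, now in detect_surges(previous, current):
    print(f"volume surge: app={app} host={host} {before} -> {now} events")
```

In a product like Mezmo Flow, aggregates of this kind feed the interactive reports and alerting described above.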