New Confluent Platform for Apache Flink makes it easy to manage and secure high-performance stream processing on-premises
Confluent, Inc. announced the general availability of Confluent Platform for Apache Flink with added enterprise-level security capabilities and easier ways to manage and scale on-premises Apache Flink workloads. Confluent is now the only company that offers data streaming paired with stream processing for cloud and on-premises workloads so businesses can turn their data into value faster. In addition, Confluent announced WarpStream Orbit for easier migration to WarpStream’s “Bring Your Own Cloud (BYOC)” deployment model.
“Stream processing is where the magic happens. It transforms real-time data into experiences and operations that drive modern businesses forward,” said Shaun Clowes, Chief Product Officer, Confluent. “With our latest announcement, any organisation can take advantage of Apache Flink — scaling, securing, and managing it with ease — unlocking innovation without limits.”
Stream processing enables businesses to analyse and react to massive amounts of data in real time for decision-making, fraud detection, personalised customer experience, and more. Apache Flink has emerged as the de facto stream processing solution for enterprises with its incredible performance, robust state management, and the flexibility to handle complex, real-time analytics at scale. However, teams often struggle to self-manage Flink because it requires configuring, operating, scaling, and securing a complex distributed system. For organisations with on-premises workloads, there’s a need for a solution that provides the flexibility and benefits of cloud-native technologies within private infrastructures.
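To make the idea concrete, the sketch below is a minimal open-source Flink DataStream job in Java that flags high-value payments as they stream in from Kafka. The broker address, the `payments` topic, the comma-separated record format, and the 10,000 threshold are illustrative assumptions for this example, not details from Confluent's announcement.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FraudAlertJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Continuously read raw payment events ("accountId,amount") from a Kafka topic.
        // Broker address and topic name are placeholders for this sketch.
        KafkaSource<String> payments = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")
                .setTopics("payments")
                .setGroupId("fraud-alert-job")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> events =
                env.fromSource(payments, WatermarkStrategy.noWatermarks(), "payments-source");

        // Flag payments above an illustrative 10,000 threshold the moment they arrive
        // (assumes well-formed input; a production job would validate records and write to a real sink).
        events.filter(line -> {
                  String[] fields = line.split(",");
                  return fields.length == 2 && Double.parseDouble(fields[1]) > 10_000;
              })
              .map(line -> "ALERT high-value payment: " + line)
              .print();

        env.execute("fraud-alert-job");
    }
}
```

The same pipeline could be expressed in Flink SQL; either way, the key point is that filtering and alerting happen continuously as events arrive rather than in a later batch job.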
Scale, secure, and simplify stream processing for on-prem workloads
With the general availability of Confluent Platform for Apache Flink, organisations can manage on-prem workloads at scale with long-term support from the world’s leading Apache Kafka® and Flink experts. Confluent Platform’s enterprise-grade Flink distribution and control plane enables teams to:
- Streamline lifecycle management with simplified deployment and scaling, enhanced automation, and efficient resource allocation.
- Ensure an integrated security model with unified access controls and consistent security policies across all systems.
- Minimise risk with consolidated Flink and Kafka support and guidance from the foremost experts in the data streaming industry.
Many companies globally are already seeing success by using Confluent Platform for Apache Flink to process and analyse their data. For example, a Fortune 50 telecom customer is using it for real-time analytics to help process and analyse network performance, deliver consistent and personalised customer experiences, and provide network visibility for threat detection. By leveraging Confluent Platform’s Flink offering, the telecom provider has saved tens of millions of dollars and significantly reduced churn, boosting its overall margins.
A new feature of Confluent Platform for Apache Flink, Confluent Manager for Apache Flink (CMF), makes deploying, updating, and scaling Flink as easy on-premises as it is in the cloud. Confluent Manager for Apache Flink enables:
- Simplified management to streamline large-scale Flink deployments in Kubernetes, making resource management and scaling more efficient.
- Enhanced collaboration that centralises management across the Confluent ecosystem. CMF promotes consistency and optimises processes, facilitating better collaboration among teams.
- Improved security through robust security mechanisms that simplify security management and ensure compliance with organisational policies.
“Companies need to make Flink more accessible, secure, and easier to operationalise wherever their workloads are deployed,” said Shari Lava, Senior Research Director, AI and Automation, IDC. “Businesses should look for offerings that combine deep Flink expertise with capabilities like built-in connectors, automated operations, and strong customer success and support to accelerate time-to-value and simplify getting Flink applications into production. Solutions that provide a unified control plane across Flink and other components like Kafka go a long way in enabling more companies to harness real-time data and govern it effectively.”
Speed up migrations to WarpStream with reduced costs and disaster recovery capabilities
WarpStream’s BYOC deployment model is a popular option for customers with large-scale workloads and relaxed latency requirements who are looking to use their own virtual private cloud (VPC). Migrating from open source Kafka to a BYOC model has traditionally been a challenging, manual process: it involves navigating different Kafka environments and building custom tooling, which adds time and cost and introduces data quality issues. WarpStream Orbit makes it easier than ever to move existing workloads from open source Kafka, or any Kafka-compatible service, to WarpStream clusters. Customers can seamlessly migrate to WarpStream, optimise existing Kafka clusters with tiered storage to reduce costs, and set up disaster recovery for high-throughput, relaxed-latency workloads.