This short video explains why companies use Hazelcast's ultra-fast in-memory and stream processing technologies for business-critical applications.
Stream processing is a hot topic right now, especially for any organization looking to provide insights faster. But what does it mean for users of Java applications, microservices, and in-memory computing?
In this webinar, we will cover the evolution of stream processing and in-memory computing in relation to big data technologies, and why stream processing is the logical next step for in-memory processing projects.
Now, deploying Hazelcast-powered applications in a cloud-native way becomes even easier with the introduction of Hazelcast Cloud Enterprise, a fully managed service built on the Enterprise edition of Hazelcast IMDG. Can't attend live? You should still register! We'll send the recording to all registrants after the webinar.
Part of what Hazelcast® does is store data, and there are trade-offs among the various options for data storage. In this webinar, we’ll start from the basics and progress incrementally through the options, pointing out how each one affects speed, space, and cost, along with the pros and cons of each.
There is no one right answer that suits every case, which is why there is a default. But if you’re using the default without being aware of the alternatives, you might find that an alternative is more appealing. A sensibly coded data model can halve the amount of hardware you need to store the same data, or nearly double the data transmission speed. Sometimes you can even get both.
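To see why a sensibly coded data model can save so much space, consider plain Java serialization: storing a record as a generic map repeats the field names in every entry and boxes every value, while a typed class uses primitive fields and records its field names only once, in the class descriptor. The sketch below illustrates the idea with standard JDK serialization only; the `Trade` record and class names are hypothetical examples, not part of any Hazelcast API.

```java
import java.io.*;
import java.util.*;

public class StorageTradeoff {

    // Hypothetical typed record: primitive fields, names stored once per class.
    static class Trade implements Serializable {
        final long id;
        final String symbol;
        final double price;

        Trade(long id, String symbol, double price) {
            this.id = id;
            this.symbol = symbol;
            this.price = price;
        }
    }

    // Measure how many bytes an object occupies under JDK serialization.
    static int serializedSize(Object o) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(o);
        }
        return buf.size();
    }

    // True when the typed representation is smaller than the generic map.
    static boolean typedSmallerThanGeneric() {
        try {
            // Generic representation: string keys repeated per record, boxed values.
            Map<String, Object> generic = new HashMap<>();
            generic.put("id", 12345L);
            generic.put("symbol", "HZ");
            generic.put("price", 3.14);

            Trade typed = new Trade(12345L, "HZ", 3.14);

            return serializedSize(typed) < serializedSize(generic);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Map<String, Object> generic = new HashMap<>();
        generic.put("id", 12345L);
        generic.put("symbol", "HZ");
        generic.put("price", 3.14);

        System.out.println("generic map:  " + serializedSize(generic) + " bytes");
        System.out.println("typed class:  "
                + serializedSize(new Trade(12345L, "HZ", 3.14)) + " bytes");
    }
}
```

The same record costs noticeably fewer bytes as a typed class, and the gap grows with the number of records, since the map repeats its key strings and boxing overhead for every entry. Hazelcast offers its own serialization options beyond JDK serialization, which is exactly the kind of trade-off the webinar walks through.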
Neil is a solution architect for Hazelcast®, the world’s leading open source in-memory data grid.
In more than 25 years of work in IT, Neil has designed, developed, and debugged a number of software systems for companies large and small.