With the new version, users will get access to a completely new Otomi Web Console (the UI of Otomi). The new console offers a more intuitive UI with improved navigation and self-service for platform engineers and developers. In Otomi, the console provides access to predefined self-service tasks that generate validated configuration code in an easy way, preventing misconfiguration. Otomi generates and stores code for everything done in the platform.
New Build feature
Otomi originally started with easy self-service for advanced network configuration and a full observability stack. With the introduction of Workloads, Otomi now generates all the configuration for deploying application workloads the GitOps way, without a user having to write any YAML. With the introduction of the Build feature, users can now build images directly from application source code; a Dockerfile isn't even needed anymore. This makes Otomi a true self-hosted alternative to services like Heroku, but in this case completely Kubernetes native and compliant.
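To make the idea concrete, a Dockerfile-less build could be described with a small piece of configuration along these lines. This is a hypothetical sketch: the field names below are illustrative assumptions, not Otomi's actual schema. Buildpack-style tooling behind such a feature detects the application's language and produces an OCI image from the source repository alone.

```yaml
# Hypothetical build definition -- field names are illustrative assumptions,
# not Otomi's actual configuration schema.
name: my-app
mode:
  buildpacks:                # build straight from source; no Dockerfile needed
    repoUrl: https://github.com/example-org/my-app   # hypothetical source repo
    revision: main
tag: v1.0.0                  # resulting image tag, pushed to the platform registry
```

The point of such a definition is that the platform, not the developer, owns the mechanics of turning source code into a compliant container image.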
Integrated cost monitoring

Cloud Bill Shock is a term heard more and more frequently. That is why we have integrated cost monitoring. This allows platform administrators to set cost-alert quotas, providing visibility into the cost implications per team. Besides setting quota alerts, platform teams can also get insight into the cost utilization of the complete platform (the Kubernetes clusters).
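As an illustration, a per-team cost-alert quota could be expressed as a small configuration fragment like the one below. Again, this is a hypothetical sketch; the keys are assumptions for the sake of the example, not Otomi's actual schema.

```yaml
# Hypothetical cost-alert configuration -- keys are illustrative assumptions.
teams:
  team-demo:
    costAlerts:
      monthlyQuota: 500          # alert when the team's monthly spend exceeds this amount
      notify: platform-admins    # hypothetical alert receiver
```

Expressing quotas as configuration keeps cost policy versioned and auditable, just like the rest of the platform's generated code.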
Improved Workloads feature
The improved Workloads feature makes it even easier for developers to deploy applications according to security and compliance standards, which in turn makes the software delivery process more efficient and secure. This improvement brings us another step closer to our mission of giving time back to developers and helping to save time, money, and scarce resources.
Lifecycle management

The Kubernetes release cycle moves extremely fast, and all applications running on Kubernetes need to keep up. This places a heavy burden on platform teams, who constantly need to upgrade and test all the tools they run on Kubernetes. The integration framework and migration tools used in Otomi's development process help us do this lifecycle management faster and more efficiently. Yes, we would like to see it go even faster, but we still depend on the community behind all the open-source projects we integrate. In the end, the biggest advantage of using Otomi is that you don't need to do the lifecycle management yourself. The new version of Otomi brings upgrades for Harbor, Gitea, Istio, and cert-manager, paving the way to support Kubernetes version 1.25 by the end of May.
Introducing Otomi Cloud
This week we also launched otomi.cloud. Otomi Cloud is a cloud service for generating a Community Edition license for Otomi. The backend, however, is ready to provide more advanced features, such as multi-cluster operations and a full Heroku-like cloud service, in the near future. So keep an eye out and prepare to be blown away by the features we are going to come up with soon.