Valiton Technology Radar

As of: March 2026

Our Technology Radar summarizes the technologies and processes that are relevant to us, rated on the basis of our day-to-day work. It lists things we would prefer not to use in new projects anymore (HOLD) as well as elements whose relevance we are convinced of for the future (ADOPT).

[Radar chart: quadrants Languages & Frameworks, Platforms, Tools, Techniques; rings Adopt, Trial, Assess, Hold]

Languages & Frameworks

Adopt

  • 1. Astro

    Astro is a JavaScript framework for creating web applications. Astro creates small "islands" to make your page interactive without using JavaScript everywhere; that is, it eliminates unnecessary JavaScript. For the islands you can use your favorite library, such as React or Vue. For the rest, Astro uses plain HTML. This means that if you know HTML, you can use Astro.

  • 2. Cube.js

    Cube.js is a platform for building analytical web applications that leverage an organization's existing database. It works with structured data sources, enabling powerful tools for transforming and pre-aggregating data. The platform is highly flexible, allowing organizations to easily integrate it into existing applications and platforms. We recommend leveraging Cube.js, as it provides a simple yet powerful way to build scalable analytical applications that can generate insights quickly and efficiently.

  • 3. FastAPI

    FastAPI is a high-performance Python web framework for building APIs quickly and efficiently. It offers a set of tools for creating scalable APIs that are easy to learn and use. FastAPI's out-of-the-box support for asynchronous programming makes it a great choice for building real-time applications. With FastAPI, building APIs that can handle high traffic loads is simple and efficient.

  • 4. Huggingface Transformers

    Huggingface Transformers is an open-source library for natural language processing (NLP). It offers a wide range of pre-trained models and tools for analyzing and processing text data, making it easy to generate insights and predictions. Huggingface Transformers is easy to integrate with existing applications and platforms, providing a simple and efficient way to build NLP applications.

  • 5. LangChain

    LangChain is an advanced framework that empowers the development of applications powered by Large Language Models (LLMs). It seamlessly supports renowned LLMs such as ChatGPT, Vertex, Cohere, Aleph Alpha, and many others. With LangChain, developers can leverage a variety of reusable components, including prompt templates, chat history memory, and tools for incorporating results from vector searches and external sources like Google, Wikipedia, and Wolfram into prompts. These versatile components and tools can be effortlessly chained together to orchestrate the LLM pipeline, all through intuitive and standardized interfaces.

  • 6. LangGraph

    LangGraph is a library for building GenAI applications that implement agent or multi-agent workflows. LangGraph originates from the same ecosystem as LangChain, but can be used independently. Nevertheless, LangGraph offers good integration and abstraction in communication with LLMs when used in conjunction with LangChain.

  • 7. Next.js

    Next.js is a flexible JavaScript framework for creating scalable applications. It enables server-side rendering and provides an intuitive page-based routing system, making it easy to create complex, rich web experiences. Next.js is modular, allowing organizations to quickly and easily integrate custom modules, including APIs, libraries, and plugins. Its powerful development tools support popular databases, allowing for faster iteration and deployment. With Next.js, building dynamic, feature-rich applications is simple and efficient.

  • 8. PyTorch

    PyTorch is an open-source machine learning library for efficient computation and development of deep learning models. It is used for a wide range of tasks, from training models to producing complex neural networks. PyTorch's flexible architecture makes it easy to integrate with existing applications and platforms. It also has excellent support for GPUs, enabling organizations to take advantage of hardware acceleration for model training. We recommend leveraging PyTorch, as it provides a powerful, reliable, and efficient way to develop and train deep learning models.

  • 9. React

    React is a popular JavaScript library for building user interfaces. It provides a component-based architecture, making it easy to create reusable UI elements that can be used across projects. React also supports server-side rendering and has excellent performance and scalability, making it ideal for large-scale projects. We suggest using React, as it simplifies UI development and provides a flexible and scalable foundation for web applications.

  • 10. Streamlit

    Streamlit is a Python framework for building simple web apps and UIs with only a few lines of code. We use it for rapid prototyping and the creation of non-production applications for PoCs and demonstrators.

  • 11. Symfony

    Symfony is a popular PHP web application framework used for building complex, high-performance web applications. It's been around for over a decade and has a strong community of developers supporting it. We've used it in several projects and find it reliable and easy to work with.

  • 12. Tailwind CSS

    Tailwind CSS follows the utility-first approach of the Atomic Design methodology. Unlike heavyweight frameworks like Bootstrap and Bulma, Tailwind CSS classes come with a built-in design system and can be flexibly combined, making it easy to create custom designs. Tailwind works with any JavaScript framework or even plain HTML, often reduces the amount of custom CSS required to a fraction, and produces very small stylesheets through CSS purging (often < 10 KB).

  • 13. Vue.js

    Vue.js is a progressive JavaScript framework for building user interfaces. It's lightweight and easy to integrate into existing projects. We've used it in several projects and found it to be a great choice for building dynamic, responsive user interfaces.

Trial

  • 52. Crew AI

    Build intelligent AI teams that work like human crews. CrewAI allows you to create collaborative AI agents with distinct roles and expertise that coordinate automatically to solve complex business problems and streamline operations.

  • 53. DeepEval

    Quality assurance and automated testing of LLM-based applications in CI/CD pipelines. DeepEval enables 'unit testing' of LLM outputs similar to Pytest. It provides metrics for RAG, agents and other LLM use cases.

  • 54. NiceGUI

    NiceGUI enables fast development of web interfaces directly in Python. It offers pre-built UI components (Vue Quasar) and TailwindCSS styling. This is a good alternative to Streamlit.

  • 55. Web Components Lit

    Building on top of the Web Components standards, Lit adds just what you need to be happy and productive: reactivity, declarative templates and a handful of thoughtful features to reduce boilerplate and make your job easier. Every Lit feature is carefully designed with web platform evolution in mind.

Assess

  • 59. OpenObserve

    OpenObserve is a cloud-native observability platform built specifically for logs, metrics, traces, and analytics, designed to work at petabyte scale.

Hold

  • 67. spaCy

    spaCy is a free, open-source library for advanced natural language processing in Python. It's fast and efficient, with high accuracy rates. We have used it extensively in our data projects, such as text classification and entity recognition. We put it on hold because spaCy is no longer state of the art. Consider using Huggingface Transformers instead.

Platforms

Adopt

  • 14. Apache Airflow

    Apache Airflow is an open-source platform used to programmatically author, schedule, and monitor workflows. We use it heavily in most data projects, such as ETL pipelines, as it offers great flexibility and extensibility.

  • 15. Apache Kafka

    Apache Kafka is an open-source distributed event streaming platform used for building fault-tolerant architectures for real-time streaming data. It can be used to collect, store, distribute, and analyze massive amounts of data from numerous sources. Kafka is highly configurable, offering robust APIs for both producers and consumers, as well as security and scalability support. Leveraging Kafka can help us to process and manage streaming data more effectively and efficiently.

  • 16. API Gateways

    API Gateways provide centralized control, visibility, and enhanced security for an organization's APIs. They enable a business to control who has access to its APIs and which operations a consumer can perform, and to enforce policies such as rate limiting and throttling to prevent resources from being overloaded. Other features include traffic control, authorization, authentication, request validation, and analytics to help optimize performance. We would like to evaluate various API management and gateway solutions in order to identify the platform best suited to meet our needs.

  • 17. AWS Organizations

    We use AWS Organizations to maintain and govern our AWS infrastructure and that of our customers. It simplifies many everyday tasks and supports us in applying our security standards.

  • 18. AWS Sagemaker

    AWS Sagemaker is a cloud-based machine learning platform that provides a fully managed environment for training, deploying, and managing models. It enables our data scientists to quickly build, train, and deploy high-quality machine learning models without needing to manage the underlying infrastructure. SageMaker includes extensive APIs, algorithms, architectures, and tools to support every step of the machine learning process.

  • 19. dbt

    dbt is a command-line tool that enables data analysts and engineers to transform data in their warehouse more effectively. We use it mostly in combination with Snowflake to build high-quality data pipelines and models.

  • 20. Opensearch

    OpenSearch is a community-driven, open-source search and analytics suite used by developers to ingest, search, visualize, and analyze data. OpenSearch consists of a data store and search engine (OpenSearch), a visualization and user interface (OpenSearch Dashboards), and a server-side data collector (Data Prepper).

  • 21. Tideways

    Tideways is a PHP performance monitoring and profiling solution that we use to identify performance bottlenecks in our PHP applications.

  • 22. Weaviate

    Weaviate is an open-source vector database. It allows you to store data objects and vector embeddings from your favorite ML models, and scales seamlessly to billions of data objects. A vector database forms the backbone of vector search and generative search.

Trial

  • 56. Trino

    Fast distributed SQL query engine for big data analytics that helps you explore your data universe.

Assess

  • 60. KServe

    KServe (previously KFServing) solves production model serving on Kubernetes. It delivers high-abstraction and performant interfaces for frameworks like Tensorflow, XGBoost, ScikitLearn, PyTorch, and ONNX.

  • 61. Lance/LanceDB

    Modern open-source lakehouse format for multimodal AI with integrated vector and full-text search. Lance is a file format specifically for AI/ML applications, claiming to have 100x faster random access than Parquet. LanceDB is the database built on top, combining vector search, full-text search, and SQL analytics in a unified interface, supporting multimodal data.

  • 62. qdrant

    qdrant is an open-source vector database. It promises to be very performant and to scale well to millions of vectors. It also provides pre-filtering, clients in many programming languages, and an intuitive web UI to explore the content.

Hold

  • 68. Snowflake

    Snowflake is a cloud-based data storage and analytics service that we use in many of our data projects. It's often used in combination with dbt, and we have found it to be a reliable and effective technology. We put it on hold because, in our experience, the cost-to-scale ratio fell short of our expectations.

Tools

Adopt

  • 23. Ansible

    Ansible is an open source IT automation engine that automates provisioning, configuration management, application deployment, orchestration, and many other IT processes.

  • 24. Apache Superset

    Apache Superset is an open-source, cloud-native application for data exploration and data visualization. It is capable of handling data at petabyte scale, and in our experience it has proven dependable and efficient.

  • 25. ArgoCD

    ArgoCD is a tool for managing Kubernetes deployments using a GitOps approach, where application configurations are stored in a Git repository. It keeps applications in sync with their desired state and provides an easy way to track and visualize changes through a web interface or command line. With features like automated rollbacks and multi-cluster support, it helps simplify the process of managing complex Kubernetes environments.

  • 26. ClickHouse

    ClickHouse is an open-source columnar database management system (DBMS) designed for high-performance analytics and data processing. It is known for its exceptional speed and scalability, making it ideal for handling large volumes of data and performing real-time analytical queries.

  • 27. DVC

    Data Version Control is a system designed specifically for machine learning projects, enabling developers to securely track, protect, share and reproduce data sets, functions, pipelines and more throughout the project lifecycle. DVC reduces onboarding time and costs, making it worth a trial for its ability to streamline data version control.

  • 28. Gitlab

    Gitlab is one of our primary tools for source code management, CI/CD and package management. We have been using the Community Edition for years and continue to be impressed by its growing feature set.

  • 29. Grafana

    Grafana is a composable platform for monitoring and observability which we use in many of our projects to monitor our infrastructure. It has consistently demonstrated its reliability and effectiveness in our experience, making it a go-to tool for us.

  • 30. Grafana Loki

    Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It was designed to be very cost-effective and easy to operate. Rather than indexing the content of logs, it only indexes a set of labels for each log stream.

  • 31. Helm

    Helm is a package manager for Kubernetes and helps us manage our applications running in a Kubernetes cluster. Using Helm charts helps us to standardize workflows and reduces complexity.

  • 32. Karpenter

    Karpenter is an open-source node lifecycle management project built for Kubernetes. Adding Karpenter to a Kubernetes cluster can dramatically improve the efficiency and cost of running workloads on that cluster. The tool integrates tightly with AWS and can react to AWS events such as EC2 Spot termination notices, automatically spinning up new instances when a Spot request is cancelled by AWS. This increases the stability of Spot workloads and ultimately saves us money, because we can use Spot instances for more use cases.

  • 33. Kubernetes

    Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. We have found it to be a reliable and effective technology for managing our applications.

  • 34. OpenTofu

    Terraform changed its license terms some time ago. Even though this does not affect us yet, OpenTofu, a fork supported by the Linux Foundation, has emerged. Since we use Terraform intensively, we would like to understand whether the fork can be the better option in the long term.

  • 35. Prodigy

    Prodigy is an extensible annotation tool to create training and evaluation data for machine learning models.

  • 36. Prometheus

    Prometheus is a software application used for event monitoring and alerting. It records real-time metrics in a time-series database (allowing for high dimensionality) built using an HTTP pull model, with flexible queries and real-time alerting.

  • 37. RabbitMQ

    RabbitMQ is a message broker that enables applications to send and receive messages using a variety of messaging protocols. It supports multiple messaging patterns, including point-to-point, publish/subscribe, and request/reply. RabbitMQ is highly available and can scale to handle large amounts of messaging traffic, making it a reliable and efficient choice for message-based architectures.

  • 38. Renovate

    We use Renovate (https://renovatebot.com/) to update our project dependencies using pull requests. It automatically applies all applicable patches and security updates, while filtering out risky changes. This helps us keep track of outdated dependencies and minimises the work needed for updates.

  • 39. Sonarqube

    SonarQube is an open-source platform for continuous inspection of code quality. It provides metrics, code coverage, and code duplication analysis, as well as automated code reviews, which help us maintain code quality and reduce technical debt.

  • 40. Valkey

    Valkey is an open-source, high-performance key/value database for caching, message queues, and in-memory data storage. Valkey can run standalone or in a cluster for replication and high availability. It is a drop-in replacement for Redis, and AWS offers Valkey as a managed service.

  • 41. Vite

    Vite is a new-generation build tool for scaffolding and building modern JavaScript projects, with rapidly growing popularity. It is framework-agnostic. Vite takes advantage of native ES module support and uses esbuild for pre-bundling dependencies during development (and Rollup for production builds). As a result, it can drastically reduce build times compared to webpack, Rollup, and Parcel. It also offers out-of-the-box support for TypeScript. Vite has replaced webpack in the vue-cli.

Trial

  • 57. Kargo

    Kargo is an open-source continuous promotion tool for Kubernetes that bridges the gap between CI/CD and GitOps. While tools like ArgoCD excel at keeping a cluster in sync with a Git repository, they don't address the question of how a new application version travels across stages (e.g., dev → staging → production). Kargo fills exactly this gap: it automates multi-stage application promotion by applying GitOps principles.

  • 58. Open Table Formats

    Open table formats are data storage formats that are open-source and designed for efficient querying, interoperability, and large-scale analytics. Examples include Apache Iceberg, Delta Lake, and Apache Hudi, which support features like ACID transactions, schema evolution, and time travel. These formats help organizations manage big data efficiently across different processing engines like Spark, Trino, and Flink.

Assess

  • 63. Strix

    Strix provides autonomous AI agents that act just like real hackers: they run your code dynamically, find vulnerabilities, and validate them through actual proofs of concept. It is built for developers and security teams who need fast, accurate security testing without the overhead of manual pentesting or the false positives of static analysis tools.

Hold

  • 69. Harbor

    Harbor is a cloud-native registry project that enhances the open-source Docker Distribution by adding security, identity, and management functionalities. It enables us to store, sign, and scan content for our projects that are not running in the cloud. We put it on hold because we are now using container and package registries in a project context. For the security-scanning feature based on Clair, we switched to KubeClarity.

  • 70. Snowplow

    Snowplow Analytics is an open-source, enterprise event-level analytics platform that enables data collection from multiple platforms for advanced data analytics. We have seen Snowplow as the most suitable building block for data collection, allowing us to detach from large paid analytics products. However, its license has changed, and therefore we will not consider it for any new projects.

Techniques

Adopt

  • 42. Agent2Agent Protocol

    The Agent-to-Agent Protocol (A2A) is an open standard that enables AI agents to communicate and collaborate across different platforms and frameworks. It allows agents to discover each other's capabilities, negotiate interaction methods, and securely collaborate on tasks without exposing their internal state or tools. This promotes interoperability and enables the creation of more powerful and interconnected AI ecosystems.

  • 43. Clean Code

    Clean Code is a set of principles and practices for writing code that is easy to understand, maintain, and extend. It includes naming conventions, code formatting, and the use of comments and documentation. We follow the principles of Clean Code to ensure that our code is readable, maintainable, and of high quality.
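
    A small, hypothetical before/after sketch of the naming and self-documentation principles:

```python
# Before: unclear names force the reader to reverse-engineer the intent.
def calc(d, r):
    return d - d * r

# After: descriptive names, a type-hinted signature, and a small guard
# make the intent obvious and the function safer to use.
def discounted_price(price: float, discount_rate: float) -> float:
    """Return the price after applying a discount rate between 0 and 1."""
    if not 0.0 <= discount_rate <= 1.0:
        raise ValueError("discount_rate must be between 0 and 1")
    return price - price * discount_rate
```

    Both functions compute the same thing, but only the second one can be read, reviewed, and extended without guessing.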

  • 44. Code Assistants

    The landscape of AI-based code assistants has matured significantly. There is now a broad range of tools available — from commercial solutions like GitHub Copilot, JetBrains AI Assistant, and Cursor to open-source and self-hostable alternatives such as Continue.dev, Tabby, and Opencode. These tools have moved well beyond simple autocomplete — they now support multi-file edits, test generation, refactoring, and autonomous task execution. The key questions are no longer whether they add value, but which tool best fits our workflow, data privacy requirements, and infrastructure constraints.

  • 45. Functional programming

    Functional Programming is a programming paradigm that emphasizes the use of pure functions, immutable data, and declarative style. We use functional programming to write code that is modular, testable, and easier to reason about. It also helps us to write code that is more resilient to change and easier to maintain.
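
    A brief sketch of the style in Python (the example values and VAT rate are invented): a pure function combined with map, filter, and fold steps over unchanged input data:

```python
from functools import reduce

# Pure function: its output depends only on its inputs, with no side effects.
def add_vat(price: float, rate: float = 0.19) -> float:
    return round(price * (1 + rate), 2)

prices = [10.0, 25.0, 7.5]

# Declarative pipeline; `prices` itself is never mutated.
gross = [add_vat(p) for p in prices]                 # map
large = [p for p in gross if p > 10.0]               # filter
total = reduce(lambda acc, p: acc + p, large, 0.0)   # fold
```

    Because each step is a pure transformation, every intermediate result can be tested in isolation, which is much of what makes this style easy to reason about.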

  • 46. GitOps

    GitOps ensures that a system's cloud infrastructure is immediately reproducible based on the state of a Git repository. Pull requests modify the state of the Git repository. Once approved and merged, the pull requests will automatically reconfigure and sync the live infrastructure to the state of the repository.

  • 47. LLM Observability

    LLM observability provides tools, techniques, and methodologies to help teams manage and understand LLM application and language model performance, detect drifts or biases, and resolve issues before they have significant impact on the business or end-user experience.

  • 48. Model Context Protocol (MCP)

    MCP stands for Model Context Protocol, a specification and emerging standard for how models/LLMs can access resources, tools, or prompts and use them directly. This is particularly interesting and important for agentic workflows, autonomous agents, and other AI services. We operate MCP servers ourselves and use, among other things, FastMCP as a framework.

  • 49. OpenAPI

    OpenAPI (formerly known as Swagger) is an open standard for describing APIs. It provides a way to describe the structure and functionality of APIs, which makes it easier to develop, test, and maintain them. We use OpenAPI to document our APIs, which makes it easier for developers to understand how to use them and to build applications that consume them.
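
    A minimal, hand-written OpenAPI 3.0 document might look as follows (the path and titles are invented for illustration). In practice such specs are stored as YAML or JSON and consumed by documentation, validation, and code-generation tooling:

```python
import json

# A minimal OpenAPI 3.0 document as a plain Python dict.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Example API", "version": "1.0.0"},
    "paths": {
        "/health": {
            "get": {
                "summary": "Health check",
                "responses": {"200": {"description": "Service is up"}},
            }
        }
    },
}

# Serialize to JSON for tools that consume OpenAPI documents.
document = json.dumps(spec, indent=2)
```

    Frameworks such as FastAPI generate a document of exactly this shape from code, which is one reason the two pair well.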

  • 50. Performance Testing Frontend

    Performance testing of frontend applications is a technique that helps us to measure the speed and stability of our applications. It involves simulating real-world scenarios and measuring the performance of our applications under different loads. We use performance testing to ensure that our applications are fast, reliable, and can handle the traffic they receive.

  • 51. Pipelines as code

    The pipeline-as-code technique advocates that the deployment pipeline configuration for building, testing, and deploying our applications or infrastructure should be treated as code. It should be placed under source control and, ideally, modularized into reusable components.

Trial

Assess

  • 64. Agent Skills

    Agent Skills are folders containing instructions, scripts, and resources that AI agents can dynamically load. The format was developed by Anthropic and is supported by leading AI tools (Claude, GitHub Copilot, etc.). Skills enable packaging domain expertise or repeatable workflows.

  • 65. Agentic Development Environments

    The next stage in the evolution of modern development tools is so-called agentic development environments: development environments with autonomous coding agents. These agents are capable of independently planning, implementing, reviewing, testing, and, if necessary, correcting tasks. Execution typically takes place in isolated environments (e.g., Git worktrees), enabling parallel and secure development. Examples of such development environments include Aperant (formerly Auto Claude), Conductor, Google Antigravity, and JetBrains Air.

  • 66. On-Device AI

    Large language models (LLMs), computer vision (CV) and audio models can now run on edge devices such as smartphones and tablets. In addition to on-device inference, it is possible to fine-tune the models on the device. With no need to send information to a data center, this brings great benefits like more privacy, low latency and reduced costs.

Hold