
IPv4/IPv6 Certified

Full dual-stack support for modern networking


NVMe SSD

Ultra-fast storage for maximum performance


Domain Registration

Secure your perfect domain name today


Newsticker

Current information and updates


New Monitoring Infrastructure Live

NEW
We have expanded and optimized our monitoring infrastructure to ensure maximum availability and fast response times. By implementing a high-availability setup for Grafana, we have established a robust failover configuration that keeps monitoring available even in the event of individual server failures. Data sources, dashboards, and alerting systems have been carefully coordinated, giving customers comprehensive insight into their system performance. Additionally, monitoring exporters such as node_exporter and dcgm-exporter have been deployed on high-performance systems equipped with NVIDIA GPUs, enabling precise measurements of hardware and software performance even under complex workloads. The integration of Ceph and NVIDIA DCGM dashboards ensures transparent monitoring of critical system components. Our customers benefit from a stable, reliable monitoring solution with immediate alerting on anomalies. Automated notifications via Telegram allow potential issues to be detected and resolved quickly, further enhancing operational security. With this improvement, we reaffirm our commitment to the highest service quality and provide our customers with a future-proof foundation for their IT infrastructure.
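To illustrate the kind of pipeline described above, here is a minimal sketch of threshold alerting on exporter output: it parses Prometheus text-format metrics (the format node_exporter and dcgm-exporter expose) and flags values above a limit. The metric names, sample values, and thresholds are illustrative assumptions, not our actual alerting rules.

```python
# Hypothetical sketch: parse Prometheus exposition-format text and evaluate
# simple threshold rules. Metric names and limits are illustrative only.

def parse_metrics(text: str) -> dict:
    """Parse Prometheus text format into {metric_name: value}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comment lines
            continue
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            continue  # ignore lines that are not "name value" samples
    return metrics

def check_thresholds(metrics: dict, rules: dict) -> list:
    """Return one alert message per metric exceeding its threshold."""
    alerts = []
    for metric, limit in rules.items():
        value = metrics.get(metric)
        if value is not None and value > limit:
            alerts.append(f"ALERT {metric}={value} exceeds {limit}")
    return alerts

sample = """\
# HELP node_load1 1m load average.
node_load1 7.5
node_memory_Active_bytes 8.2e9
"""
alerts = check_thresholds(parse_metrics(sample), {"node_load1": 4.0})
print(alerts)  # one alert, for node_load1
```

A real deployment would fetch this text from an exporter's /metrics endpoint and hand alerts to a notifier (e.g. a Telegram bot) instead of printing them.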

AI Infrastructure Further Optimized – New: Qwen3-VL-30B-AWQ Available

NEW
Our cloud infrastructure has received a significant performance boost: we have strategically expanded GPU capacity and successfully integrated the newer Qwen3-VL-30B-AWQ model into our LocalAI-Bridge. This powerful multimodal model is now accessible to customers via the customer portal—ideal for demanding use cases such as document analysis or visual question-answering systems. Additionally, we have improved the routing system for model requests, ensuring that custom alias names are now correctly resolved and visible in the API response. This enhances predictability and integration depth in automated workflows. The overall stability of the AI backend has been further increased through targeted preloading and optimized concurrency configurations. All changes are running in the background without downtime—your applications immediately benefit from greater model diversity, shorter load times, and more precise response prioritization. We continue to think ahead: your innovation, our infrastructure.
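The alias behavior described above can be sketched as a small routing table: the client asks for a custom alias, the router resolves it to the deployed model, and the alias is echoed back in the response. The alias name and table are hypothetical examples, not the actual LocalAI-Bridge implementation.

```python
# Hypothetical sketch of alias resolution in a model-routing layer.
# The alias "vision-default" is an illustrative assumption.

ALIASES = {
    "vision-default": "Qwen3-VL-30B-AWQ",  # custom alias -> deployed model
}

def route_request(requested_model: str) -> dict:
    """Resolve an alias to its backing model, preserving the requested name
    in the response so automated clients see the alias they asked for."""
    resolved = ALIASES.get(requested_model, requested_model)
    return {
        "model": requested_model,   # alias visible in the API response
        "backend_model": resolved,  # model actually served
    }

resp = route_request("vision-default")
print(resp["model"], "->", resp["backend_model"])
```

Echoing the requested name back, rather than the resolved one, is what makes alias-based integrations predictable: clients can match responses to requests without knowing the backend model inventory.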

New Automations for More Accurate Document Processing

NEW
Our development team has recently made comprehensive improvements to the document-matching algorithm in resys2.0 to optimize the assignment of transactions to invoices and receipts. By implementing an intelligent RAG chat system and integrating additional search strategies based on the Paperless-ngx API, we have significantly increased the accuracy of document matching. This results in fewer misassignments and enables customers to achieve nearly error-free accounting automation. Another focus was improving the log output structure, allowing historical data to be completely re-matched if needed. This ensures a consistent data foundation and guarantees correct assignments even when document layouts or payment flows change. The new log structure provides precise insight into the matching process and improves traceability. Additionally, internal PDF generation for missing invoices has been optimized, so that valid PDFs are automatically created and transmitted to the Paperless-ngx system even when original documents are absent. This ensures seamless integration and increases the reliability of the entire processing chain. These technical advancements underscore our commitment to continuously advancing the automation of financial processes, driving greater efficiency, accuracy, and reliability in daily accounting.
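A transaction-to-invoice matcher of the kind described above can be sketched as a scoring function over candidate pairs: exact amount match, date proximity, and whether the invoice number appears in the payment reference. Field names, weights, and sample data below are illustrative assumptions, not the resys2.0 implementation.

```python
# Hypothetical sketch: score invoice candidates for a bank transaction and
# pick the best match. Weights and fields are illustrative only.

from datetime import date

def match_score(txn: dict, invoice: dict) -> float:
    score = 0.0
    if abs(txn["amount"] - invoice["amount"]) < 0.01:  # amounts agree to the cent
        score += 0.5
    days_apart = abs((txn["date"] - invoice["date"]).days)
    score += max(0.0, 0.3 - 0.01 * days_apart)  # decays with date distance
    if invoice["number"] in txn["reference"]:   # invoice number in memo line
        score += 0.2
    return score

def best_match(txn: dict, invoices: list) -> dict:
    """Return the invoice with the highest matching score."""
    return max(invoices, key=lambda inv: match_score(txn, inv))

txn = {"amount": 119.00, "date": date(2025, 3, 4),
       "reference": "RE-2025-017 payment"}
invoices = [
    {"number": "RE-2025-016", "amount": 119.00, "date": date(2025, 1, 2)},
    {"number": "RE-2025-017", "amount": 119.00, "date": date(2025, 3, 1)},
]
print(best_match(txn, invoices)["number"])  # RE-2025-017
```

In practice a system like this would add a minimum-score cutoff so ambiguous transactions are routed to manual review instead of being force-assigned, which is how misassignments are kept low.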

Expanded Infrastructure for Maximum Performance and Stability

In the past 24 hours, we have further expanded our cloud infrastructure to enhance performance and stability for our customers. This includes the installation of 36 new SSDs in the storage cluster and the expansion of GPU capacity to support compute-intensive workloads. These measures ensure that applications respond more quickly and complex processes such as machine learning or large-scale data analysis run even more efficiently. At the same time, the storage cluster has been expanded to 200 TB to meet growing demands for storage space and data processing. In addition, new switches have been installed to enable 10-GbE connectivity, significantly boosting network performance and minimizing latency during data transfers. These technical improvements are part of our ongoing infrastructure expansion to ensure maximum availability and performance even with increasing data volumes. Our monitoring systems continuously oversee the entire infrastructure in real time—from CPU utilization to network capacity. The new hardware is immediately put into operation through automated testing and performance optimizations, allowing customers to benefit from immediate improvements. We believe that continuous innovation is the foundation for long-term success and therefore consistently focus on technical improvements that directly benefit our customers.

GPU Capacity Significantly Expanded for AI Workloads

Our infrastructure team has successfully expanded computing capacity for modern AI applications: new Blackwell-based GPU servers are now operational, offering native NVFP4 support for efficient inference. This expansion enables significantly faster model execution—particularly for large language models such as Qwen3-Coder-Next—while reducing power consumption and optimizing resource utilization. Customers are already leveraging this performance for highly dynamic cloud deployments, LocalAI bridges, and vLLM-based tool-calling workflows that seamlessly integrate into existing architectures. Performance metrics demonstrate substantial improvements in latency and throughput, providing a clear competitive advantage for businesses reliant on fast, reliable AI infrastructure.
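As a sketch of the tool-calling workflows mentioned above, the following assembles a chat-completions request payload in the OpenAI-compatible shape that servers such as vLLM accept. The tool name, schema, and prompt are hypothetical examples, not a documented configuration of our platform.

```python
# Hypothetical sketch: build an OpenAI-compatible chat-completions payload
# offering the model one callable tool. Tool name and schema are illustrative.

import json

def build_tool_call_request(model: str, prompt: str) -> dict:
    """Assemble a request body that lets the model call get_server_load."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_server_load",  # hypothetical example tool
                "description": "Return the current load of a named server.",
                "parameters": {
                    "type": "object",
                    "properties": {"host": {"type": "string"}},
                    "required": ["host"],
                },
            },
        }],
    }

payload = build_tool_call_request("Qwen3-Coder-Next", "How loaded is gpu-node-1?")
print(json.dumps(payload, indent=2))
```

The payload would be POSTed to the server's chat-completions endpoint; when the model decides to use the tool, the response contains a structured tool call rather than plain text, which is what makes these workflows composable.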