News
Stay informed and up-to-date with the latest and most relevant updates.
Introducing Bo T, powered by ai axpress
May

Introducing Bo T, your go-to AI-based Cybersecurity Advisor at DI CS! Bo is a versatile companion whose main aim is to guide you through your daily cybersecurity touchpoints with ease and expertise. Bo's passion and specialty lie in cybersecurity, but he brings a unique twist to his role: he is not just a specialist, but also a dedicated customer service expert at DI. So whenever you need assistance navigating the intricate world of cybersecurity, remember that Bo is here to help!

OpenAI's Webcast Updates
April

We're excited to reveal a major update to our OpenAI Webcast. This comprehensive course, offering an introduction to OpenAI and Azure OpenAI Studio, now includes new features to enrich your learning experience. As part of this upgrade, participants will gain exclusive access to the GPT-4 model, the GPT-4 Vision model, and the ability to add their own data service via Playground Premium for a duration of two weeks. Additionally, the course will provide insightful guidance on the practical applications and effective usage of these models and services. Register now to secure your spot and be a part of this enriching journey with OpenAI!

Introducing GPT-4o
May

GPT-4o is the latest and most capable Azure OpenAI model.

Building on the success of its predecessors, GPT-4o takes a significant leap forward by accepting both text and images as inputs. This new approach enhances the model's accuracy and responsiveness, thereby improving overall human-computer interaction. GPT-4o maintains the same level of proficiency as GPT-4 Turbo in English text and coding tasks; where it truly shines, however, is in non-English languages and vision tasks. At the moment the new model is available to users with Developer Packages upon request. Updates regarding GPT-4o availability for Playground Packages are coming soon!
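For illustration, here is a minimal sketch of a combined text-and-image request to a GPT-4o deployment via the openai Python SDK against Azure OpenAI. The endpoint, API version, deployment name, and image URL below are placeholders, not the actual values provisioned with our packages.

```python
# Minimal sketch: text + image input to a GPT-4o deployment on Azure OpenAI.
# All endpoint, key, and deployment values are illustrative placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
)

response = client.chat.completions.create(
    model="gpt-4o",  # name of your GPT-4o deployment
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/diagram.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```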

New functionality: Tracing
May

Azure ML has a new preview functionality called tracing. Tracing makes it easier to debug an LLM application and view its performance. Traces record specific events or the state of an application during execution; they can include data about function calls, variable values, system events, and more. Traces help break down an application's components into discrete inputs and outputs, which is crucial for debugging and understanding an application. The tracing feature will support LLM calls, LangChain, and AutoGen.
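As a rough sketch of the pattern, the example below assumes the prompt flow tracing SDK (promptflow-tracing), where traced functions become spans whose inputs and outputs are recorded; since this is a preview feature, check the current docs for exact usage.

```python
# Sketch of the tracing preview, assuming the promptflow-tracing SDK.
from promptflow.tracing import start_trace, trace

@trace
def build_prompt(question: str) -> str:
    # Inputs and outputs of traced functions are recorded as spans.
    return f"Answer concisely: {question}"

@trace
def answer(question: str) -> str:
    prompt = build_prompt(question)
    # Call your LLM here; the call and its result become part of the trace.
    return f"(model response to: {prompt})"

if __name__ == "__main__":
    start_trace()  # begins collecting traces for this session
    print(answer("What does tracing record?"))
```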

Mistral Large becomes part of our API Only
April

We are thrilled to announce that Mistral Large can now be accessed through API Only! This state-of-the-art LLM, developed by Mistral AI, is equipped with exceptional capabilities for complex multilingual reasoning tasks. Whether it's text understanding, transformation, or code generation, Mistral Large sets new standards in language processing. Mistral Large also comes with the default Azure content filters that prevent harmful responses; the content filtering system is integrated and enabled by default for all customers. Find out how to order API Only here.
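To give a feel for usage, here is a minimal sketch of a chat request to a Mistral Large deployment, assuming an Azure-hosted, chat-completions style endpoint; the endpoint URL and key are placeholders for whatever API Only provisions for you.

```python
# Sketch: chat request against a Mistral Large deployment.
# ENDPOINT and API_KEY are placeholders provided with your API Only order.
import os
import requests

ENDPOINT = os.environ["MISTRAL_LARGE_ENDPOINT"]  # e.g. https://<deployment>.../v1/chat/completions
API_KEY = os.environ["MISTRAL_LARGE_KEY"]

payload = {
    "messages": [
        {"role": "user", "content": "Summarise the following German paragraph in English: ..."}
    ],
    "max_tokens": 256,
}

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```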

ai attack is now offering AWS
April

ai attack is thrilled to announce the launch of its new AWS-based API and Custom Development offerings, designed to deliver tailored AI solutions. With the API Package, customers get access to a range of powerful AI models, while the Custom Development account lets them flexibly integrate AI services tailored to their specific needs.
Both AWS offerings are now accessible at SDC.
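As a rough illustration only, the sketch below shows one way such AWS-hosted models are commonly invoked, assuming access through Amazon Bedrock and boto3; the model ID and region are illustrative placeholders rather than details of the actual packages.

```python
# Sketch: invoking a model via Amazon Bedrock with boto3 (illustrative values).
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="eu-central-1")

body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Give me three uses of LLMs in customer service."}],
})

response = bedrock.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example model ID
    body=body,
)
print(json.loads(response["body"].read())["content"][0]["text"])
```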

MLOps Guide
May

We have fantastic news to share! We have launched the ai attack MLOps Guide, a comprehensive solution designed to empower teams in their machine learning endeavors. This robust MLOps template not only streamlines the journey from Proof-of-Concept to a fully matured product but also significantly boosts efficiency and collaboration within teams. It is meticulously tailored to enable data scientists to refine and enhance the quality of their AI models, ensuring a seamless development process and exceptional end results.

Check the latency of LLMs on Azure and AWS
May

As latency is an important factor and a big challenge with LLMs, we have set up a website where you can track and analyze the latency of ai attack LLMs in Azure OpenAI, Azure Machine Learning, and AWS Bedrock, with daily live updates! The tool generates synthetic requests using random words, sized according to the number of context tokens in the requested shape profile.
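To illustrate the measurement idea, here is a minimal sketch that builds a synthetic prompt from random words to approximate a target context size and times a single chat completion; the deployment name and the rough 0.75 words-per-token ratio are assumptions, not the tool's exact internals.

```python
# Sketch of the latency-measurement idea: synthetic random-word prompt, then
# one timed request. Deployment name and token ratio are assumptions.
import os
import time
import random
import string
from openai import AzureOpenAI

def random_word(min_len: int = 3, max_len: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=random.randint(min_len, max_len)))

def synthetic_prompt(context_tokens: int) -> str:
    # Assume roughly 0.75 words per token for English-like text.
    return " ".join(random_word() for _ in range(int(context_tokens * 0.75)))

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-15-preview",
)

prompt = synthetic_prompt(context_tokens=1000)
start = time.perf_counter()
client.chat.completions.create(
    model="gpt-4o",  # deployment under test
    messages=[{"role": "user", "content": prompt}],
    max_tokens=50,
)
print(f"end-to-end latency: {time.perf_counter() - start:.2f}s")
```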