According to a recent “State of AI” report by the artificial intelligence firm Appen, 55 percent of companies accelerated AI deployment in 2020 and 67 percent expect to pick up the pace further this year. As AI adoption grows, overhead costs will become an important consideration, and there is a risk that these expenditures will snowball if organizations do not plan ahead.
IDC has confirmed this trend. Ritu Jyoti, the firm’s vice president of AI and automation research, said the pandemic “put AI at the top of the corporate agenda, allowing for business flexibility and relevance. We have now entered the AI-supported era of work and decision-making in all functional areas of the company. The responsible creation and use of AI solutions capable of rapid detection, prediction, response and adaptation is an important business necessity,” the DCD portal quoted the expert as saying. Jyoti’s comments were published alongside IDC’s market estimates, according to which the global AI market will grow by 15.2 percent this year to $341.8 billion and is expected to exceed $500 billion by 2024.
In terms of infrastructure, businesses need to be adaptable and flexible. This need for flexibility is making the cloud, and especially the hybrid cloud, the foundation of AI as demand for large amounts of data grows. Using the hybrid cloud, companies can meet AI’s technology needs at a cost level appropriate to their business and workloads.
Infrastructure-as-a-Service (IaaS) enables organizations to use, develop and implement their AI programs without sacrificing performance. However, there are a number of infrastructure elements that organizations need to keep in mind when evaluating potential IaaS providers.
First, organizations must have access to computing resources, including CPUs and GPUs, to take full advantage of the capabilities offered by AI. Machine learning algorithms need speed and performance to carry out a large number of calculations. While a CPU-based environment can handle basic AI workloads, deep learning requires larger data sets and the ability to deploy scalable neural network algorithms.
CPU-based computing may not be able to meet these needs, and GPUs may be a better choice. The higher performance of GPUs can speed up deep learning significantly compared to CPUs. However, this speed comes at a higher cost, and in some cases switching from CPU to GPU may not be cost-effective. It is important to find the right balance for the tasks at hand.
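To illustrate that balance, here is a minimal back-of-the-envelope sketch comparing total job cost on CPU versus GPU instances. The hourly prices and speed-up factors are hypothetical, purely for illustration; real figures vary by provider and workload.

```python
# Hypothetical hourly prices -- illustrative only, not real provider rates.
CPU_HOURLY_COST = 0.50   # USD per hour for a CPU instance (assumed)
GPU_HOURLY_COST = 3.00   # USD per hour for a GPU instance (assumed)

def cheaper_option(cpu_hours: float, gpu_speedup: float) -> str:
    """Return which instance type finishes the job at lower total cost.

    cpu_hours:   estimated training time on the CPU instance
    gpu_speedup: how many times faster the GPU runs this workload
    """
    cpu_cost = cpu_hours * CPU_HOURLY_COST
    gpu_cost = (cpu_hours / gpu_speedup) * GPU_HOURLY_COST
    return "GPU" if gpu_cost < cpu_cost else "CPU"

# A deep-learning job with a 10x GPU speed-up: the GPU wins on cost.
print(cheaper_option(cpu_hours=100, gpu_speedup=10))   # GPU
# A light workload with only a 2x speed-up: the CPU stays cheaper.
print(cheaper_option(cpu_hours=100, gpu_speedup=2))    # CPU
```

The point of the sketch is that the GPU premium only pays off when the workload actually exploits the speed-up, which is why deep learning usually justifies GPUs while lighter AI tasks often do not.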
The ability to scale storage as the amount of data grows is essential for many businesses. Organizations need to determine what type of storage they require, weighing several factors: among others, what level of AI it is worthwhile or necessary to use, and whether they need to make decisions in real time. For example, a FinTech company that uses AI systems for real-time trading decisions may need fast all-flash storage, while other companies may be better served by higher-capacity but slower storage.
Businesses also need to work out how much data their AI applications will generate, since AI makes better decisions when it has more data to draw on. Databases grow over time, so storage capacity needs to be monitored and expanded accordingly.
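Monitoring capacity can start very simply. The sketch below, using only the Python standard library, checks how full a volume is and flags when it crosses a threshold; in practice this logic would feed a proper monitoring system rather than be polled by hand, and the 80 percent threshold is an assumption, not a recommendation.

```python
import shutil

def storage_alert(path: str = "/", threshold: float = 0.8) -> bool:
    """Return True when the volume holding `path` is more than
    `threshold` full (e.g. 0.8 = 80 percent).

    A minimal capacity check; real deployments would export this
    metric to a monitoring system instead of polling ad hoc.
    """
    usage = shutil.disk_usage(path)
    return usage.used / usage.total > threshold

if storage_alert("/", threshold=0.8):
    print("Storage above 80% -- plan capacity expansion")
```

Tracking this figure over time also gives the growth curve needed to plan expansions before capacity runs out.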
Networking is another key element and the backbone of AI infrastructure. Good, fast and reliable networks are essential to maximize results. Deep learning algorithms depend heavily on communication, so networks need to keep pace with the demands of expanding AI efforts. Scalability is paramount, and AI requires a high-bandwidth, low-latency network. It is also important that the service offering and technology stack are consistent across regions.
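Why bandwidth matters becomes concrete with a rough transfer-time estimate. The sketch below uses assumed dataset sizes and link speeds and ignores protocol overhead and congestion, so it is a lower bound, not a benchmark.

```python
def transfer_time_hours(dataset_gb: float, bandwidth_gbps: float) -> float:
    """Estimate hours needed to move a dataset over a network link.

    dataset_gb:     dataset size in gigabytes
    bandwidth_gbps: usable link speed in gigabits per second
    Ignores protocol overhead and congestion -- a rough lower bound.
    """
    gigabits = dataset_gb * 8            # bytes -> bits
    seconds = gigabits / bandwidth_gbps
    return seconds / 3600

# Moving a 10 TB training set over a 1 Gbps link takes roughly a day,
# versus a couple of hours at 10 Gbps.
print(round(transfer_time_hours(10_000, 1), 1))    # ~22.2 hours
print(round(transfer_time_hours(10_000, 10), 1))   # ~2.2 hours
```

At the data volumes deep learning consumes, an order-of-magnitude difference in bandwidth is the difference between an overnight job and a blocked pipeline, which is why a high-bandwidth, low-latency network is a hard requirement rather than a nice-to-have.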
Artificial intelligence can handle sensitive data such as patient records, financial information and personal data, so it is essential that the infrastructure is protected end to end with state-of-the-art technology. A data breach would be a disaster for any organization, but in the case of AI an influx of bad data can also lead to erroneous conclusions and, in turn, to erroneous decisions.
As AI models become more complex, they become more expensive to run, so it is critical that the infrastructure provides performance headroom to keep costs under control. As companies make greater use of AI, they place an increasing strain on network, server and storage infrastructure. Businesses need to find IaaS providers that offer cost-effective dedicated servers, so they can increase performance and continue investing in artificial intelligence without blowing their budgets.