Many indicators point to artificial intelligence (AI) as the next battlefield for the top three cloud providers.
Google highlighted its recent AI efforts last week, including the launch of Google Home, a device similar to Amazon Echo that taps the Google search engine and carries the company’s new personal assistant. Google’s new Android-based Pixel is its first smartphone to ship with that assistant built in.
Google’s announcements closely followed Microsoft’s own AI news. At its Ignite conference last October, Microsoft announced AI advancements in its Azure cloud, including the deployment of field-programmable gate arrays (FPGAs) in every Azure node, giving the platform supercomputer-class performance and accelerating its machine-learning and AI frameworks.
Microsoft Research networking expert Doug Burger stated that every Azure node now has FPGAs as well as GPUs, allowing for “exascale” throughput, meaning on the order of a quintillion (10^18) operations per second. That gives Azure 10 times the AI capability of the world’s biggest supercomputer, Burger claimed. He also noted that “a single FPGA board turbo-charges Azure, allowing it to recognize images significantly faster.”
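To put “exascale” in perspective, here is a minimal back-of-the-envelope sketch in Python. The per-node throughput figure is a made-up assumption for illustration, not a number Microsoft has published:

```python
# Back-of-the-envelope scale comparison (illustrative numbers only).
EXA = 10**18  # exascale: one quintillion operations per second

# Hypothetical per-node accelerator throughput, assumed for illustration.
ops_per_node = 10**12               # 1 teraop/s per node (assumption)
nodes_needed = EXA // ops_per_node

print(f"Exascale is {EXA:.0e} ops/s")
print(f"At {ops_per_node:.0e} ops/s per node, you'd need {nodes_needed:,} nodes")
# Note: a "billion operations per second" (1e9) is a gigascale figure --
# nine orders of magnitude short of exascale.
```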
He said that Azure now supports network speeds of up to 25Gbps, which he claimed is faster than any other cloud provider offers.
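For a rough sense of what a 25Gbps link means in practice, the sketch below converts the line rate to bytes per second and estimates the transfer time for a sample payload; the 100GB payload size is a hypothetical example, not a figure from the article:

```python
# Rough throughput arithmetic for a 25 Gbps link (illustrative only).
line_rate_bits = 25 * 10**9           # 25 gigabits per second
line_rate_bytes = line_rate_bits / 8  # ~3.125 GB/s, ignoring protocol overhead

payload_gb = 100                      # hypothetical 100 GB dataset (assumption)
seconds = payload_gb * 10**9 / line_rate_bytes

print(f"Line rate: {line_rate_bytes / 10**9:.3f} GB/s")
print(f"Moving {payload_gb} GB takes about {seconds:.0f} s at full line rate")
```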
Google plans to build out its own cloud to power AI-based capabilities, just as Microsoft is doing with Azure. According to Karl Freund, a senior analyst at Moor Insights & Strategy, Google has the largest pool of AI scientists, with Microsoft and China’s Baidu, which also runs a search engine and cloud, a close second. Microsoft recently announced the creation of an AI group staffed with 5,000 engineers.
In a Forbes blog post, Freund wrote that Microsoft’s stealth deployment of FPGAs across Azure over the past few years is a huge deal, and a strategy other large cloud providers will likely consider as they look to improve their machine-learning capabilities.
Freund said in an interview that he had known of Microsoft’s interest in FPGAs for five years, going back to before Microsoft Research quietly revealed “Project Catapult,” its plan to deploy the accelerators throughout Azure. Microsoft Research first disclosed the FPGA work in a paper two years ago, which described how an FPGA fabric deployed across 1,632 servers was used to speed up the Bing search engine.
Even so, Freund said, it was surprising that Microsoft moved forward with the full deployment. He also highlighted how Microsoft’s decision to deploy FPGAs contrasts with Google’s approach of building AI into its cloud using non-programmable ASICs. Google’s Tensor Processing Unit (TPU) is a fixed-function chip tailored to the TensorFlow machine-learning library, processing the complex mathematical calculations in TensorFlow graphs. Google revealed in May that TPUs had already been running in its cloud for more than a year.
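For readers unfamiliar with TensorFlow, the snippet below is a minimal, eager-mode sketch (using the current TensorFlow 2 Python API, not the 2016-era graph API) of the kind of computation involved: a matrix multiply plus bias through an activation function, the basic building block of the neural-network math TPUs are built to accelerate. Shapes and values are arbitrary examples:

```python
import tensorflow as tf

# A tiny example of the dense linear algebra at the heart of neural networks:
# inputs times weights, plus a bias, through an activation function.
x = tf.constant([[1.0, 2.0, 3.0]])   # 1x3 input (arbitrary values)
w = tf.random.normal([3, 4])         # 3x4 weight matrix
b = tf.zeros([4])                    # bias vector

y = tf.nn.relu(tf.matmul(x, w) + b)  # one "layer" of computation
print(y.numpy())                     # result has shape (1, 4)
```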
The main difference between Microsoft’s and Google’s approaches to AI-accelerated cloud computing is that Microsoft’s FPGAs can be reprogrammed in the field, while Google’s TPUs, being ASICs, cannot.
“Microsoft will be more responsive. Because FPGAs are field-programmable, they can reprogram them once a month. This means they can change the gates without having to replace the chip. A TPU cannot be reprogrammed, because you can’t change the silicon,” Freund stated. To change what a TPU does, he explained, Google would have to physically swap out the processors in its cloud nodes.
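As a loose software analogy (my own illustration, not Freund’s), the difference is like logic you can swap out after deployment versus behavior frozen at construction:

```python
# Loose software analogy for FPGA vs. ASIC flexibility (illustration only).

class FPGALike:
    """Behavior can be 'reprogrammed' after deployment, like an FPGA's gates."""
    def __init__(self, kernel):
        self.kernel = kernel           # current 'bitstream'

    def reprogram(self, new_kernel):
        self.kernel = new_kernel       # swap logic without replacing hardware

    def run(self, data):
        return self.kernel(data)

class ASICLike:
    """Behavior is fixed at construction, like silicon etched at the fab."""
    def __init__(self, kernel):
        self._kernel = kernel          # fixed forever; no reprogram() exists

    def run(self, data):
        return self._kernel(data)

fpga = FPGALike(lambda xs: [x * 2 for x in xs])
print(fpga.run([1, 2, 3]))             # [2, 4, 6]
fpga.reprogram(lambda xs: [x ** 2 for x in xs])
print(fpga.run([1, 2, 3]))             # [1, 4, 9] -- new logic, same 'chip'

asic = ASICLike(lambda xs: [x * 2 for x in xs])
print(asic.run([1, 2, 3]))             # [2, 4, 6]; changing this means new silicon
```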
Google does hold one advantage over Microsoft: its TPUs are already proven in production, having powered its cloud services for more than a year.