Programmable chips turning Azure into a supercomputing powerhouse

Microsoft is embarking on a major upgrade of its Azure systems. The new hardware the company is installing in its 34 datacenters around the world still contains the mix of processors, RAM, storage, and networking hardware that you'll find in any cloud system, but to this mix Microsoft is adding something new: field programmable gate arrays (FPGAs), highly configurable chips that can be rewired in software to provide hardware-accelerated implementations of software algorithms.

The company first investigated using FPGAs to accelerate the Bing search engine. In "Project Catapult," Microsoft added off-the-shelf FPGAs on PCIe cards from Altera (now owned by Intel) to some Bing servers and programmed those FPGAs to perform parts of the Bing ranking algorithm in hardware. The result was a 40-fold speed-up compared to a software implementation running on a regular CPU.
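To make that division of labor concrete, here's a minimal sketch of the offload pattern: the host batches per-document feature vectors and hands them to a scoring engine that, on the real system, would run as a circuit on the FPGA. The Accelerator class and its weighted-sum scorer below are software stand-ins invented for illustration; they are not Bing's actual ranking algorithm or Catapult's interface.

```python
from typing import List

class Accelerator:
    """Software stand-in for an FPGA scoring role (hypothetical interface)."""
    def __init__(self, weights: List[float]):
        # On the real hardware these parameters are part of the FPGA image;
        # reprogramming the chip swaps in a new ranking function.
        self.weights = weights

    def score_batch(self, feature_batch: List[List[float]]) -> List[float]:
        # In hardware this inner product runs as a deeply pipelined circuit,
        # scoring many documents per clock rather than looping on a CPU.
        return [sum(w * f for w, f in zip(self.weights, features))
                for features in feature_batch]

# Host side: batch candidate documents' features, ship them across PCIe,
# and get ranking scores back.
docs = [[0.2, 1.0, 0.0], [0.9, 0.1, 0.4]]     # per-document feature vectors
fpga = Accelerator(weights=[0.5, 0.3, 0.2])
print(fpga.score_batch(docs))                  # [0.4, 0.56]
```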

A common next step after achieving success with an FPGA is to create an application-specific integrated circuit (ASIC): a dedicated, hardcoded equivalent of the FPGA design. This is what Microsoft did with the Holographic Processing Unit in its HoloLens headset, for example, because an ASIC offers greatly reduced power consumption and size. But the Bing team stuck with FPGAs because their algorithms change dozens of times a year. An ASIC would take many months to produce, meaning that by the time it arrived, it would already be obsolete.

With this pilot program a success, the company then looked at ways to use FPGAs more widely, not just in Bing but across Azure and Office 365 as well. Azure's acute problem wasn't search engine algorithms; Azure CTO Mark Russinovich told us it was network traffic.

Azure runs a vast number of virtual machines on a substantial number of physical servers, using a close relative of Microsoft's Hyper-V hypervisor to manage the virtualization of the physical hardware. Virtual machines have one or more virtual network adaptors through which they send and receive network traffic. The hypervisor handles the physical network adaptors that are actually connected to Azure's networking infrastructure. Moving traffic to and from the virtual adaptors, while simultaneously applying rules to handle load-balancing, traffic routing, and so on, incurs a CPU cost.

This cost is negligible at low data rates, but when pushing multiple gigabits of traffic per second to a VM, Microsoft was finding that the CPU burden was considerable. Entire processor cores had to be dedicated to this network workload, meaning that they couldn't be used to run VMs.
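Some back-of-envelope arithmetic shows why the burden adds up. The per-packet cycle cost and clock speed below are assumptions chosen for illustration, not Microsoft's published figures:

```python
# At 25 Gbps of full-size (1,500-byte) frames, the host sees about 2 million
# packets per second, each of which needs rule processing in software.
GBPS = 25
FRAME_BYTES = 1500
packets_per_sec = (GBPS * 1e9 / 8) / FRAME_BYTES      # ~2.1e6 packets/s
cycles_per_packet = 500                                # assumed vswitch cost
cpu_hz = 2.4e9                                         # assumed core clock
cores_needed = packets_per_sec * cycles_per_packet / cpu_hz
print(f"{packets_per_sec:.2e} pps -> {cores_needed:.2f} cores")  # ~0.43 cores
```

And that is the friendly case: with small packets the packet rate, and hence the CPU cost, climbs by more than an order of magnitude, which is how entire cores end up consumed by network processing.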

The Bing team didn't have this network use case, so the way they used FPGAs wasn't immediately suitable. The solution that Microsoft came up with to handle both Bing and Azure was to add a second connection to the FPGAs. In addition to their PCIe connection to each hardware server, the FPGA boards were also plumbed directly into the Azure network, enabling them to send and receive network traffic without needing to pass it through the host system's network interface.

That PCIe interface is shared with the virtual machines directly, giving the virtual machines the power to send and receive network traffic without having to bounce it through the host first. The result? Azure virtual machines can push 25 gigabits per second of network traffic, with a latency of 25-50 microseconds. That's a tenfold latency improvement over the software-switched path (implying the old path took roughly 250-500 microseconds), and it's achieved without demanding any host CPU resources.

Technically, something similar could be achieved with a suitable network card; higher-end network cards can similarly be shared and directly assigned to virtual machines, allowing the host to be bypassed. But this process is usually subject to limitations (for example, a given card may only be assignable to 4 VMs simultaneously), and it doesn't offer the same flexibility. When programming the FPGAs for Azure's networking, Microsoft can build the load balancing and other rules directly into the FPGA. With a shared network card, these rules would still have to be handled on the host processor within a device driver.
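As a rough illustration of what such a rule looks like, the sketch below applies a match-action load-balancing rule to a packet's header fields: match traffic addressed to a virtual IP, then rewrite the destination to a back-end chosen consistently per flow. The addresses, field names, and hash-based selection are made up for illustration; the article doesn't describe Azure's actual rule format, and on the FPGA this logic runs per packet at line rate rather than in host software.

```python
import hashlib

BACKENDS = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]   # example load-balanced pool

def apply_rules(packet: dict) -> dict:
    # Match: traffic addressed to the load balancer's virtual IP and port.
    if packet["dst_ip"] == "52.0.0.1" and packet["dst_port"] == 443:
        # Action: hash the flow so the same client sticks to the same
        # back-end, then rewrite the destination address.
        flow_key = f"{packet['src_ip']}:{packet['src_port']}"
        idx = int(hashlib.md5(flow_key.encode()).hexdigest(), 16) % len(BACKENDS)
        packet = dict(packet, dst_ip=BACKENDS[idx])
    return packet

pkt = {"src_ip": "198.51.100.7", "src_port": 50123,
       "dst_ip": "52.0.0.1", "dst_port": 443}
print(apply_rules(pkt))   # dst_ip rewritten to one of the pool members
```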

This solution of network-connected FPGAs works as well for Azure as it does for Bing, and Microsoft is now rolling it out to its data centers. Bing can use the FPGAs for workloads such as ranking, feature extraction from photos, and machine learning. Azure can use them for accelerated networking.

Currently, the FPGA networking is in preview; once it’s widespread enough across Microsoft’s datacenters, it will become standard, with the long-term goal being to use FPGA networking for every Azure virtual machine.

Networking is the first workload in Azure, but it's not going to be the only one. In principle, Microsoft could offer a menu of FPGA-accelerated algorithms (pattern matching, machine learning, and certain kinds of large-scale number crunching would all be good candidates) that virtual machine users could opt into; longer term, custom programming of the FPGAs could be an option.

Microsoft gave a demonstration at its Ignite conference this week of just what this power could be used for. Distinguished engineer Doug Burger, who led the Project Catapult work, demonstrated a machine translation of all 3 billion words of English Wikipedia on thousands of FPGAs simultaneously, crunching through the entire set of data in just a tenth of a second. The total processing power of all those FPGAs together was estimated at about 1 exa-operation (one billion billion, or 10^18, operations) per second, working on an artificial intelligence-style workload.
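Those quoted figures are easy to check against each other; dividing them out (using only the numbers above) gives the implied throughput and per-word cost:

```python
# Arithmetic on the article's own figures: 3 billion words in 0.1 seconds
# on hardware doing ~1e18 operations per second.
words = 3e9
seconds = 0.1
ops_per_second = 1e18
words_per_second = words / seconds                 # 3e10 words/s
ops_per_word = ops_per_second / words_per_second   # ~3.3e7 ops per word
print(f"{words_per_second:.1e} words/s, ~{ops_per_word:.1e} ops/word")
```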

While not a primary target for the company, the scale of cloud compute power, combined with the increasingly high-performance interconnects that FPGAs and other technologies enable, means that cloud-scale systems are likely in time to become competitive with, and eventually surpass, the supercomputers used for high-performance computing workloads. With this kind of processing power on tap, it won't be too long before Azure becomes a collection of supercomputers available for anyone to rent by the hour.