
Wednesday, April 5, 2017

Apple to focus on chip design in-house to reduce costs; to stop sourcing chips from Imagination


Image: Reuters
Apple Inc’s decision to stop licensing graphics chips from Imagination Technologies Group Plc is the clearest example yet of the iPhone maker’s determination to take greater control of the core technologies in its products – both to guard its hefty margins and to position it for future innovations, especially in so-called augmented reality.
The strategy, analysts say, has already reduced Apple’s dependence on critical outside suppliers like ARM Holdings Plc, now owned by SoftBank Group Corp.
Apple once relied heavily on ARM to design the main processor for the iPhone, but it now licenses only the basic ARM architecture and designs most of the chip itself.
More recently, when Apple bought the headphone company Beats Electronics, part of a $3 billion deal in 2014, it ripped out the existing, off-the-shelf communications chips and replaced them with its own custom-designed W1 Bluetooth chip.
“Apple clearly got rid of all the conventional suppliers and replaced about five chips with one,” said Jim Morrison, vice president of TechInsights, a firm that examines the chips inside electronics devices.
“Today we do much more in-house development of fundamental technologies than we used to,” Apple Chief Financial Officer Luca Maestri said at a February conference. “Think of the work we do on processors or sensors. We can push the envelope on innovation. We have better control over timing, over cost and over quality.”
Most vendors of consumer electronics products rely on outside suppliers for chip design and development, primarily because it is extremely expensive. That has created huge opportunities for companies like ARM, Qualcomm Inc and Nvidia Corp, which have developed core technologies for processing, communications and graphics that are used by scores of vendors.
Now, though, Apple is so big that it can economically create its own designs, or license small pieces of others’ work and build on it. As with ARM and Qualcomm, the actual manufacturing of the chips is still contracted out to a semiconductor foundry, such as those run by Samsung Electronics and Taiwan Semiconductor Manufacturing Co Ltd.
MOVE FAST, SAVE MONEY
Bringing more of the design work in-house cuts complexity, people familiar with the processes say. Instead of managing one or more design teams and then a fabricator, Apple has only to manage the fabricator.
It may also help the company move faster – and save money – as it focuses on new technologies such as virtual and augmented reality. Apple CEO Tim Cook has indicated that Apple plans to integrate augmented reality into its products, which makes 3-D sensors and graphics chips like Imagination’s especially important.
Even before formally cutting off Imagination, Apple had given hints that it was preparing to design its own graphics processors. Specifically, it introduced a piece of its own code called Metal for app developers. App developers use Metal to make their apps talk to the graphics chip on the iPhone.
By putting a piece of Apple-designed code between app developers and the phone’s chip, Apple has made it possible to swap out the chip without interrupting how the developers work. That could also make it easier to bridge the gap for developers between the graphics chips on Apple’s phones and its desktop computers, which currently require some separate coding.
“By promoting Metal instead of relying on other existing standards, Apple is not only able to control what graphics chip functionality is exposed at its own pace, but also blur the line for developers between coding for desktop and mobile GPUs,” said Pius Uzamere, the founder of a virtual reality startup called Ether.
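A minimal sketch of what that abstraction looks like from the developer's side (ordinary application code, not anything from Apple's implementation): the app asks Metal for a device and a command queue and never names the GPU vendor underneath.

```swift
import Metal

// The app only ever talks to the abstract MTLDevice, so Apple can change the
// underlying graphics chip without this code changing.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal-capable GPU is available")
}

// Work is submitted through a command queue created from that abstract device;
// nothing here names Imagination, AMD or any other vendor.
guard let queue = device.makeCommandQueue(),
      let commandBuffer = queue.makeCommandBuffer() else {
    fatalError("Could not create a command queue")
}

print("Submitting GPU work to: \(device.name)")  // whichever GPU Apple ships
commandBuffer.commit()   // render or compute encoders would record work before this
```

Because code like this compiles and runs unchanged whatever silicon sits behind the MTLDevice, swapping out the graphics chip is largely invisible to developers.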
Taking control of the iPhone’s chips can also help Apple keep costs down, which is especially important as it gears up for a feature-laden new iPhone this fall. Timothy Arcuri of Cowen & Co said in a research note that he thinks the curved screens expected on the new phone could add as much as $50 in cost, for example.
Shebly Seyrafi, an analyst at FBN Securities, estimates that the average price of an iPhone increased only 1 percent to $695 last quarter, while costs increased 8 percent to $420, resulting in an iPhone gross margin of 39.6 percent. That is down from the 44 percent average gross margin for iPhones in 2015, according to Seyrafi’s estimates.
Apple spends only $75 million a year on licensing fees for Imagination’s chips. But licensing fees to chip designers, taken together, are a significant cost for the iPhone. Apple recently sued Qualcomm for $1 billion over licensing terms for its communications chips – which Apple would have trouble designing in-house because of patent issues.
Reuters
Publish date: April 5, 2017 11:21 am | Modified date: April 5, 2017 11:30 am

Friday, January 6, 2017

CES 2017: Nvidia launches Shield TV with Google Assistant and Nvidia Spot AI Mic


Image Credit: Nvidia
Nvidia has launched a new Nvidia Shield TV at CES 2017 in Las Vegas. The company claims the new Shield TV will deliver unmatched performance across gaming, streaming and AI integration. In addition to a new design, Nvidia has integrated Google Assistant for TV. The Shield TV is open for pre-order and will start shipping to the United States, Canada and select regions in Europe later this month.
The device is priced at $199.99 and now ships with both the gaming controller and remote, so they no longer need to be bought separately. The Nvidia Shield Pro will also be available later this month with the controller, remote, a headphone jack and 500GB of storage. Interestingly, the company will ship a separate version of Shield TV with custom software to China later this year.
Image Credit: Nvidia
Shield adds support for 4K HDR streaming while, according to Nvidia, delivering three times the performance of other media streaming devices on the market. It supports video streaming apps from Netflix, YouTube, Google Play Movies and VUDU, and a YouTube TV app is set to come to the new Shield devices in the coming months. Jen-Hsun Huang, founder and CEO of Nvidia, added, “NVIDIA’s rich heritage in visual computing and deep learning has enabled us to create this revolutionary device.”
Shield TV is not limited to being a media streaming device; the company highlighted that the Shield library of games has expanded to thousands of titles, along with the ability to stream Ubisoft games such as Watch Dogs 2, Assassin’s Creed Syndicate, For Honor and others. New Ubisoft games will be available to Shield TV owners alongside their PC releases. Highlighting the AI capabilities of the new device, Huang added that with “SHIELD’s new AI home capability, we can control and interact with content through the magic of artificial intelligence from anywhere in the house.”
Image Credit: Nvidia
Shield TV will provide the first hands-free Google Assistant integration on a TV, and Google has optimised the experience for large screens. In addition to streaming, gaming and AI, Nvidia has added support for turning the Shield TV into a smart home hub that can connect to and control hundreds of smart devices around the home. The Nvidia Spot is an interesting add-on that makes the device the backbone of the AI home, with improved control throughout the house.

Tuesday, November 15, 2016

IBM, NVIDIA partner for 'fastest deep learning enterprise solution' in the world

Image: IBM and NVIDIA

By Conner Forrest | November 14, 2016, 7:25 AM PST

IBM and NVIDIA recently teamed up to announce IBM PowerAI, a software toolkit that accelerates the training of machine learning models and will boost IBM Watson's capabilities.



IBM PowerAI, a new deep learning software toolkit announced Monday, could help train computers for machine learning and AI tasks faster than ever before. The solution, a joint project from IBM and NVIDIA, runs on an IBM server built specifically for AI and uses NVIDIA NVLink as well.

According to a press email announcing the availability of the solution, IBM PowerAI is the "world's fastest deep learning enterprise solution," and it will "help train computers to think and learn in more human-like ways at a faster pace."

This new toolkit was designed to work with IBM's Power Systems LC lineup of Linux servers, specifically the IBM Power S822LC for High Performance Computing (HPC). That server has NVIDIA's latest GPU technology on board, which helps further its potential for use in deep learning and AI applications.


While this new product is geared toward "emerging computing methods of artificial intelligence" like deep learning, it also has implications for IBM's better-known cognitive computing project, Watson. IBM PowerAI will bolster Watson's enterprise AI skills by providing the platform with additional training.


"PowerAI democratizes deep learning and other advanced analytic technologies by giving enterprise data scientists and research scientists alike an easy to deploy platform to rapidly advance their journey on AI," Ken King, general manager for OpenPOWER, said in the email. "Coupled with our high performance computing servers built for AI, IBM provides what we believe is the best platform for enterprises building AI-based software, whether it's chatbots for customer engagement, or real-time analysis of social media data."

Of course, none of this matters if there aren't real-life applications for the technology. In the press email, IBM noted that deep learning technologies like those this toolkit provides have been used for fraud protection in banks, in facial recognition applications, and even in self-driving cars.

In the past, IBM Watson has been used in verticals like healthcare, legal, finance, retail, cybersecurity, and even fantasy football. So, it's possible that we will see IBM PowerAI in these fields as well.



Existing IBM customers who have IBM's Power S822LC for HPC server can access IBM PowerAI immediately at no additional charge, the email said.
The 3 big takeaways for TechRepublic readers


1. IBM and NVIDIA recently partnered on IBM PowerAI, a new software toolkit, to accelerate deep learning initiatives in business.
2. PowerAI is meant to run on the IBM Power S822LC for High Performance Computing and exists as a separate product, but will work with Watson.
3. IBM sees these deep learning technologies as important to verticals like healthcare, finance, retail, and self-driving cars.

Thursday, October 6, 2016

Nvidia's new Game Ready drivers are optimized for Gears of War 4, Mafia III and Shadow Warrior 2


By Shawn Knight on October 6, 2016, 2:30 PM



Nvidia on Thursday published a new GeForce Game Ready driver that’s optimized for a trio of upcoming high-profile games.

The Nvidia GeForce 373.06 WHQL Game Ready driver preps gamers for Gears of War 4, Mafia III and Shadow Warrior 2. The driver also adds an SLI profile for Iron Storm, re-enables the Battlefield 1 SLI profile that was removed over the summer and adds 3D Vision profiles for Ashes of the Singularity, Gears of War 4, Mafia III and Shadow Warrior 2 (the last two are listed as “not recommended”).

Nvidia’s latest release fixes a trio of issues in Windows 10, although 17 bugs in Microsoft's current OS persist, most of which affect SLI configurations. There are also nine known issues dating back to Windows 7.



Those planning to pick up Gears of War 4 may want to hop over to Nvidia’s website, as the company has posted a helpful optimization guide. The Coalition has baked in nearly three dozen graphics settings as well as several configuration options for displays and input devices. Gears 4 is also compatible with Nvidia’s G-Sync technology, no doubt good news if you have a compatible monitor.

Gears of War 4 arrives on October 7 for those who purchased the Ultimate Edition and October 11 for everyone else. Mafia III, the sequel to 2010’s Mafia II, also launches tomorrow, while those clamoring for Shadow Warrior 2 will have to wait until October 13.

PS - AMD just yesterday released a new set of Radeon Software Crimson Edition drivers that are optimized for Gears of War 4 and Mafia III.

Saturday, October 1, 2016

Nvidia could be readying GTX 1080 Ti for CES 2017 launch





By Tim Schiesser on September 30, 2016, 7:30 AM




A report from Chinese site Zol, as spotted by TechPowerUp, suggests that Nvidia could be preparing the GeForce GTX 1080 Ti for launch at the Consumer Electronics Show in January 2017.

The GTX 1080 Ti would be the second graphics card to use Nvidia's Pascal GP102 silicon, which was first used in the Titan X. This new report suggests that for the GTX 1080 Ti, 26 of 30 SMs will be enabled, leaving the card with 3,328 CUDA cores and 208 TMUs. In contrast, the Titan X has 28 SMs enabled for 3,584 CUDA cores.

The GPU will reportedly come with a base clock of 1,503 MHz and a boost clock of 1,623 MHz. As for the memory interface, we're expecting to see 384-bit GDDR5X providing 480 GB/s of bandwidth, attached to 12 GB of VRAM.
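As a rough sanity check, the rumored figures hang together. The sketch below (plain Swift; the per-SM core count and memory data rate are assumptions based on existing Pascal parts, not confirmed specifications) reproduces the reported numbers.

```swift
// Back-of-the-envelope check of the rumored GTX 1080 Ti specifications.
let coresPerSM = 128      // assumption: GP102 uses 128 CUDA cores per SM, like GP104
let enabledSMs = 26       // 26 of 30 SMs enabled, per the report
print("CUDA cores: \(enabledSMs * coresPerSM)")    // 3,328 - matches the report

let busWidthBits = 384.0  // reported memory bus width
let dataRateGbps = 10.0   // assumption: 10 Gbps effective GDDR5X data rate
let bandwidthGBps = busWidthBits / 8.0 * dataRateGbps
print("Memory bandwidth: \(bandwidthGBps) GB/s")   // 480 GB/s - matches the report
```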

With this sort of specification sheet, the GTX 1080 Ti will be an expensive graphics card, especially considering the GTX 1080 already retails for $599. There's no word on exact pricing just yet, but it could end up at $700-800 in Nvidia's current line-up. The Titan X, currently Nvidia's most powerful graphics card, retails for a hefty $1,199.

Between now and CES 2017, Nvidia is expected to launch the GTX 1050 and, if a new report is correct, the GTX 1050 Ti. Both cards will slot beneath the $250 GTX 1060 in Nvidia's mid-range and entry-level line-up. The GTX 1050 Ti will reportedly pack 768 CUDA cores, while the GTX 1050 will use 640, down from 1280 cores in the GTX 1060.

Tuesday, September 27, 2016

Nvidia job listing hints at collaboration with Apple for future Macs

Nvidia has been focusing heavily on its graphics card portfolio for both the consumer and professional segments. It has also moved into the automotive business, and over the years its OEM business (where it supplies graphics solutions for non-gaming PCs) has seen a decline.
But a recent job listing from Nvidia hints at a possible collaboration with Apple on future Mac products. Apple currently uses AMD graphics across its Mac line of computers, including the MacBook Pro, iMac and the Mac Pro, which uses AMD FirePro graphics.
According to The Motley Fool, Nvidia posted the ad on its jobs listing page last week, and it indicates the company is working on software to allow its graphics processors to work with future Mac products. Another report on iTech Post points to a job posting for a Mac graphics driver. Taken together, these are strong hints that Nvidia graphics solutions could appear in future Mac products.
Image: Nvidia job posting
Neither Apple nor Nvidia has commented on the job listings that have been spotted. According to The Motley Fool, Nvidia has said in the past that the non-gaming graphics solutions it has shipped in computers haven't garnered much revenue for the company, especially when compared with its high-end discrete graphics solutions in the gaming, data centre and professional visualisation spaces.
Apple’s Mac lineup is due for a refresh, and it would not be surprising if Apple did go with Nvidia graphics solutions in future products. Whether it does, only time will tell.

Nvidia's Tesla P4 And P40 GPUs Boost Deep Learning Inference Performance With INT8, TensorRT Support


Image: Nvidia Tesla P40
Nvidia continues to double down on deep learning GPUs with the release of two new “inference” GPUs, the Tesla P4 and the Tesla P40. The pair are the 16nm FinFET successors to the Tesla M4 and M40, with much improved performance and support for 8-bit (INT8) operations.
Deep learning consists of two steps: training and inference. Training can take billions of teraFLOPS of computation, applied over a matter of days even on GPUs, to reach the expected result. Inference, the running of the trained model against new data, needs only billions of FLOPS and can be done in real time.
The two steps in the deep learning process require different levels of performance, but also different features. This is why Nvidia is now releasing the Tesla P4 and P40, which are optimized specifically for running inference engines, such as Nvidia’s recently launched TensorRT inference engine.
Unlike the Pascal-based Tesla P100, which comes with support for the already quite low 16-bit (FP16) precision, the two new GPUs add support for the even lower 8-bit (INT8) precision. This is because researchers have found that deep learning training does not need especially high numerical precision.
Training can reach the expected result significantly faster by processing twice as much data at half the precision. Because inference runs an already-trained model, it needs even less precision than training, which is why Nvidia’s new cards support INT8 operations.
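As a rough illustration of what INT8 support buys, the sketch below (plain Swift, a toy example rather than anything from Nvidia's libraries) quantizes a handful of FP32 weights to 8-bit integers with a single scale factor and then dequantizes them to show the small error inference tolerates.

```swift
// Toy illustration of symmetric INT8 quantization (not Nvidia's implementation):
// map trained FP32 weights onto 8-bit integers using one scale factor.
let weights: [Float] = [0.42, -1.30, 0.07, 2.50, -0.91]   // hypothetical trained weights

// The largest magnitude maps to 127; everything else scales linearly.
let maxAbs = weights.map { abs($0) }.max() ?? 1.0
let scale = maxAbs / 127.0

let quantized: [Int8] = weights.map { w in
    let q = (w / scale).rounded()
    return Int8(max(-127.0, min(127.0, q)))    // clamp, then narrow to 8 bits
}

// Dequantize to see the small error inference accepts in exchange for
// 4x less memory traffic and cheaper integer arithmetic.
let restored = quantized.map { Float($0) * scale }
for (w, r) in zip(weights, restored) {
    print(w, "~", r)    // errors are on the order of scale/2, about 0.01 here
}
```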

Tesla P4

The Tesla P4 is the lower-end of the two GPUs announced, targeted at scale-out servers that want highly efficient accelerators. Each Tesla P4 draws between 50W and 75W, for a peak performance of 5.5 FP32 TeraFLOP/s and 21.8 INT8 TOP/s (tera-operations per second).
Nvidia compared its Tesla P4 GPU to an Intel Xeon E5 general purpose CPU and alleged that the P4 is up to 40x more efficient on the AlexNet image processing test. The company also claimed that the Tesla P4 is 8x more efficient than an Arria 10-115 FPGA (made by Altera, which Intel acquired).

Tesla P40

The Tesla P40 was designed for scale-up servers, where performance matters most. Thanks to improvements in the Pascal architecture as well as the jump from the 28nm planar process to a 16nm FinFET process, Nvidia claimed that the P40 is up to 4x faster than its predecessor, the Tesla M40.
The P40 has a peak performance of 12 FP32 TeraFLOP/s and 47 INT8 TOP/s, making it roughly twice as fast as its little brother, the Tesla P4. The Tesla P40 has a maximum power consumption of 250W.

TensorRT

Nvidia also announced the TensorRT GPU inference engine that doubles the performance compared to previous cuDNN-based software tools for Nvidia GPUs. The new engine also has support for INT8 operations, so Nvidia’s new Tesla P4 and P40 will be able to work at maximum efficiency from day one.
In its benchmarks, Nvidia compared the performance of the Tesla P4 and P40 GPUs, running the TensorRT inference engine, against a 14-core Intel Xeon E5-2690v4 running Intel’s optimized version of the Caffe neural network framework. According to Nvidia’s results, the Tesla P40 comes out up to 45x faster than Intel’s CPU.
So far Nvidia has been comparing its GPUs to Intel’s general purpose CPUs alone, but Intel’s main product for deep learning is now the Xeon Phi line of chips with its “many-core” (Atom-based) accelerators.
Nvidia’s GPUs likely still beat those chips by a healthy margin, due to the inherent advantage GPUs have even over many-core CPUs for such low-precision operations. At this point, however, comparing Xeon Phi with Nvidia’s GPUs would be a more realistic comparison in terms of what customers are looking to buy for deep learning applications.