
How to build a processor for machine intelligence



Graphcore

Simon Knowles, CTO and co-founder of the machine intelligence company Graphcore, talks about intelligence processors at the RAAIS event on 30 June 2017.



6 thoughts on “How to build a processor for machine intelligence”
  1. Wow, thanks, interesting; especially the answer to the last question about the comparison between the FPGA and the IPU.

  2. 11:18 – It would be possible to make a silicon prism with smaller prisms within it.

    Light of various frequencies could be reflected from a MEMS mirror into the prism's 'input area' and refracted onward to the next 'internal prism', each aligned in such a manner that a matrix calculation could be performed (see the sketch after this comment) – not a replacement for your Intelligence Processor, but a hardware accelerator without the power limitations mentioned at 11:18.

    Subbed today, more useful comments may be forthcoming.

    Thanks for this Presentation,
    Rob
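
    For reference, the matrix calculation being proposed above reduces to an ordinary matrix-vector product: fixed per-path attenuations play the role of the weights, input light intensities play the role of the vector, and each detector sums its attenuated contributions. A minimal numeric sketch, with every name and number invented purely for illustration (it describes neither Graphcore's hardware nor any real optical device):

        import numpy as np

        # Hypothetical optical matrix-vector product. The 'matrix' is a grid of
        # fixed per-path attenuations; the 'vector' is the set of input light
        # intensities; each detector sums its attenuated inputs.
        rng = np.random.default_rng(0)
        transmission = rng.uniform(0.0, 1.0, size=(4, 8))  # per-path attenuation (invented)
        light_in = rng.uniform(0.0, 1.0, size=8)           # input intensities (invented)

        # What the detectors would report: one weighted sum per output channel,
        # i.e. exactly one row of a matrix-vector product.
        detector_out = transmission @ light_in
        assert np.allclose(detector_out, np.dot(transmission, light_in))
        print(detector_out)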

  3. Crazy deep search (later augmented with giant transposition tables) ruled the roost in computer chess from the early 1970s until just a few days ago (call it 50 years) because the type of machine we had available was just so amazingly efficient at computing in this style. The proposition here is we follow this road again: make a chip that's crazy good at a certain performance trade-off, and then exploit it to the hilt even if it proves to be an awkward fit for some of the more versatile schemes (an awkward fit on top of a ubiquitous, ruthlessly optimized library—think Google's V8—on top of ruthlessly refined hardware often goes a long way toward compensating for a poor structural fit).

    This story sometimes pans out, and sometimes doesn't. Geoffrey Hinton himself complains about the herd effect, where everyone piles on top of the best current result, in a greedy social algorithm of academic prestige (and industrial cachet). So it goes. Just when you think the algorithms are finally leading the charge again, along comes some Pied Piper hardware, clobbering some low-hanging sweet spot out of the park, and the pendulum swings right back to Silicon Follow Me.

    I'm watching this video and thinking to myself "this just might be the droid we're looking for". Is Silicon Follow Me such a bad place to be? Does history rhyme? If it does, can you spot the rhyme scheme?

    The story used to be that you couldn't invest in a radical algorithmic fit because GHz inflation floated all boats (even though this was often quite wasteful at the wall socket). One thing that makes this new world extremely different is that data centers are now super good at counting calories. Meanwhile, ML remains the wild, wild west. (In a recent episode of the O'Reilly Data Show, Soumith Chintala of PyTorch confessed that he stopped reading papers in his own sub-specialty because he couldn't keep up.)

    It's a hard situation to call. I'm not yet ruling out another Follow Me node at the opposite ratio, with huge amounts of cold memory stacked onto an HBM/TSV interconnect and relatively little compute:

    https://arxiv.org/pdf/1302.1940.pdf

    What will be the final shape of the silicon DNN efficiency frontier? (A toy illustration of what an efficiency frontier means here follows this comment.) That's the $6.4 trillion question of the all-too-soon, too-late-for-hindsight calendar decade of the 2020s.

    https://en.wikipedia.org/wiki/Efficient_frontier
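
    To make the efficiency-frontier idea concrete, here is a toy Pareto filter over made-up (power, throughput) design points; none of the numbers or names below correspond to any real chip, and this is only a sketch of the concept:

        # Toy Pareto-frontier filter over hypothetical design points,
        # each given as (power_watts, throughput_tflops). All values invented.
        designs = {
            "big-compute":  (300.0, 120.0),
            "balanced":     (180.0,  90.0),
            "memory-heavy": (150.0,  40.0),
            "low-power":    ( 75.0,  35.0),
            "bad-tradeoff": (250.0,  50.0),  # more power, less throughput than "balanced"
        }

        def efficient_frontier(points):
            # Keep a point unless some other point uses no more power, delivers
            # no less throughput, and is strictly better on at least one axis.
            frontier = {}
            for name, (power, perf) in points.items():
                dominated = any(
                    p <= power and t >= perf and (p < power or t > perf)
                    for other, (p, t) in points.items()
                    if other != name
                )
                if not dominated:
                    frontier[name] = (power, perf)
            return frontier

        print(efficient_frontier(designs))  # "bad-tradeoff" is dominated and drops out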

  4. Immediately after my previous comment, I sat down with Tim O'Reilly's new book WTF?: What's the Future and Why It's Up to Us (2017), and found myself reading about Michael Schrage's maxim "Who do you want your customers to become?"

    Uber and Lyft are asking their consumers to become the kind of people who expect a car to be available as easily as they had previously come to expect access to online content. (p. 58)

    Given the highly specific compromises inherent in this architecture, I expect Graphcore to maintain a hard-core press on end-user education. Which is probably why this talk (unlike, say, the original Intel Optane announcement) contains enough actual meat for me to have written my previous comment.

    Optane was first announced in July 2015 and we're still in WTF territory on their forthcoming NVDIMM variant.

    https://www.anandtech.com/show/12041/intel-to-launch-3d-xpoint-dimms-in-2h-2018

    Intel claims that they are on track to launch 3D XPoint memory modules in the second half of 2018. They are projecting that 3D XPoint DIMMs (Optane memory, or Optame for short) will be an $8B market by 2021.

    This will be pretty amazing if it pans out, because their SSD product still doesn't exceed the DWPD rating of their flash competitors. Unless that's a cagey close-to-the-vest artificial ceiling, Optame is going to be grotesquely unsuitable for 90% of compute-intensive workloads. Which could still work out to $8B, and probably needs to, because otherwise I don't see the CPU-embedded Optame firmware—kiss the CPU you bought yesterday goodbye—making quick enough initial inroads.

    But here, with Graphcore, where it appears necessary (at first glance) to get the DNN architects on board for an express departure, we're not so likely to remain in thrall for years to the desiccated 10/100/1000 specifications of the Santa Claran PR Promise Keepers (apparently, all those RoHS whalebone bunny corsets coerce compulsory corporate constipation; what did you think frisky Sand People wore under those dead sexy aluminum bags to get through the day without a rest break or happy thought?).

    https://www.bloomberg.com/gadfly/articles/2017-03-22/chipzilla-intel-toppled-by-taiwan-s-supplier-to-the-stars — March 2017

    Something momentous happened quietly in the global semiconductor industry this week, and it deserves a bit of attention. Taiwan Semiconductor Manufacturing Co. topped Intel Corp. in market value. … To understand how Chipzilla was surpassed one need only to look at the sandpits in which the two companies play.

