Category Archives: Computing

In your time of dying, you’ll ask for a carebot, not for a nurse


Artificial empathy will exceed human abilities by 2030, because the human capacity for empathy is greatly exaggerated, especially sustained empathy for grievous suffering.

Soon a carebot will be able to comfort you 24 hours a day, without weariness on its part or any need for guilt on yours, and it will only get better at soothing you with every passing hour as it gets to know you and your responses.

A carebot will have infinite patience and no disgust. It will delicately use superhuman strength all day and all night, because it can be directly plugged into the power grid, and won’t need to drag around heavy batteries that need to be recharged.

A carebot will read not just its patient’s facial expressions but all vital signs, guided by a built-in model of pain and suffering that few caregivers will have experienced first-hand, because they have not themselves been tortured by a life-ending disease.

 

The heat problem effectively limits real computers to the power of 2D cellular automata


According to Greg Kuperberg

it’s quite striking that real computers are very close to two-dimensional, and yet they are mostly used in a RAM machine mode with an emulation of complete circuit connectivity.

and here

On the other hand, transistors in real computers are not very far away from melting. Even though many computers look 3-dimensional, most of the geometry of a computer is within each chip of the computer, and that geometry is almost completely 2-dimensional. One reason for that is the photolithography used to make the chips. But another reason is that there is no way to carry away the heat from a 3-dimensional block of transistors. Without that problem you could sandwich many chips together in a sort-of 3-dimensional pile. The heat problem effectively limits real computers to the power of 2-dimensional cellular automata. However, this 2-dimensional geometry is mostly used to simulate a RAM machine. It cannot be an efficient simulation, but it is what happens in practice, since most higher-level languages create a RAM machine environment for software. It’s also a pain to design algorithms for a 2D computational grid rather than for a RAM machine.
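
A toy way to see the inefficiency Kuperberg mentions: in the RAM model any of N memory cells is one step away, while on a √N x √N grid a worst-case access has to physically cross the chip, costing on the order of √N hops. The sketch below is only an illustration of that gap under an assumed Manhattan-distance routing model; the function names and constants are mine, not Kuperberg’s.

    import math

    def ram_access_cost(n_cells: int) -> int:
        """In the RAM model, any memory cell is reachable in a constant number of steps."""
        return 1

    def grid_access_cost(n_cells: int) -> int:
        """On a sqrt(N) x sqrt(N) mesh, a worst-case access must cross the grid,
        costing roughly 2 * sqrt(N) hops (corner-to-corner Manhattan distance)."""
        side = math.isqrt(n_cells)
        return 2 * side

    for n in (10**6, 10**9, 10**12):
        print(f"N = {n:>16,}   RAM: {ram_access_cost(n)} step   2D grid: {grid_access_cost(n):,} hops")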

According to Joe Fitzsimons

The rate at which a region of space can be cooled scales as the surface area, whereas the heat produced scales as the number of irreversible gates. For a 2D array these scale in the same way, but for a 3D array the heating scales as the volume (R^3) whereas the cooling scales as the surface area of a bounding box (R^2). Clearly you need to balance the rate at which heat is produced with the rate at which it is removed, and hence you have a scaling problem with 3D arrays. This is entirely independent of the cooling mechanism.
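
A minimal numerical sketch of that scaling argument, with arbitrary unit constants of my own: heat production is taken proportional to the number of gates (side^2 in 2D, side^3 in 3D) and heat removal proportional to the bounding surface area. The ratio stays flat for a 2D array but grows linearly with the side length in 3D.

    def heat_to_cooling_ratio(side: int, dims: int) -> float:
        """Heat produced ~ number of irreversible gates; heat removable ~ bounding
        surface area.  This ratio has to stay bounded or the gates melt."""
        heat = side ** dims                                      # one heat unit per gate
        surface = 6 * side ** 2 if dims == 3 else 2 * side ** 2  # faces of the bounding box
        return heat / surface

    for side in (10, 100, 1_000, 10_000):
        print(f"side={side:>6}   2D ratio={heat_to_cooling_ratio(side, 2):.2f}"
              f"   3D ratio={heat_to_cooling_ratio(side, 3):.2f}")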

Web of cloud-genius robots


Following up on “Mobile supercomputers”.

According to Erico Guizzo

“Coupling robotics and distributed computing could bring about big changes in robot autonomy,” said Jean-Paul Laumond, director of research at France’s Laboratory of Analysis and Architecture of Systems, in Toulouse. He says that it’s not surprising that a company like Google, which develops core cloud technologies and services, is pushing the idea of cloud robotics.

But Laumond and others note that cloud robotics is no panacea. In particular, controlling a robot’s motion—which relies heavily on sensors and feedback—won’t benefit much from the cloud. “Tasks that involve real time execution require onboard processing,” he says.

And there are other challenges. As any Net user knows, cloud-based applications can get slow, or simply become unavailable. If a robot relies too much on the cloud, a problem could make it “brainless.”
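
To put rough numbers on the real-time point: a motion-control loop commonly runs at hundreds of hertz to a kilohertz, leaving a budget of roughly 1–10 ms per iteration, while a round trip to a cloud service is often tens of milliseconds. The check below uses illustrative figures of my own, not measurements from the article.

    def fits_control_budget(loop_hz: float, round_trip_ms: float, remote_compute_ms: float) -> bool:
        """A remotely computed step only helps if the network round trip plus
        the remote compute time fits inside one control-loop period."""
        budget_ms = 1000.0 / loop_hz
        return round_trip_ms + remote_compute_ms <= budget_ms

    # Illustrative numbers (assumptions, not measurements):
    print(fits_control_budget(loop_hz=1000, round_trip_ms=40, remote_compute_ms=5))    # False: 1 kHz motor control
    print(fits_control_budget(loop_hz=1,    round_trip_ms=40, remote_compute_ms=500))  # True: once-a-second planning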

Mobile supercomputers


Update: See “Virtualized Screen: A Third Element for Cloud-Mobile Convergence” by Yan Lu, Shipeng Li, and Huifeng Shen of Microsoft Research Asia.

Earlier I quoted Christopher Mims

The problem with mobile phones, says Allan Knies, associate director of Intel Research at Berkeley, is that everyone wants them to perform like a regular computer, despite their relatively paltry hardware. Byung-Gon Chun, a research scientist at Intel Research Berkeley, thinks that he might have the solution to that problem: create a supercharged clone of your smart phone that lives in “the cloud” and let it do all the computational heavy lifting that your phone is too wimpy to handle.
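
The break-even rule behind that idea is simple: shipping a task to the cloud clone only pays off when remote compute time plus data-transfer time beats running it locally. The sketch below is a generic version of that rule with made-up numbers; it is not Chun’s actual system.

    def worth_offloading(local_ms: float, remote_ms: float, payload_bytes: int,
                         uplink_bps: float, rtt_ms: float) -> bool:
        """Offload only if remote compute plus data transfer beats local compute."""
        transfer_ms = rtt_ms + 1000.0 * 8 * payload_bytes / uplink_bps
        return remote_ms + transfer_ms < local_ms

    # Made-up example: a 2 MB photo, 5 Mbit/s uplink, 80 ms round trip,
    # 4 s to process on the phone vs. 0.2 s on the cloud clone.
    print(worth_offloading(local_ms=4000, remote_ms=200, payload_bytes=2_000_000,
                           uplink_bps=5e6, rtt_ms=80))   # True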

Now according to Brian Caulfield

Jen-Hsun Huang has always said his graphics chips were good for more than rendering explosions of zombie maniacs in videogames. In October the Nvidia chief executive got his proof when scientists at China’s National Supercomputer Center in Tianjin unveiled the Tianhe-1A, the fastest computer on earth. The beast sucks up 4 megawatts of power to forecast weather and survey mines at a speed of 2.5 quadrillion calculations per second. In it are 7,200 Nvidia graphics processors.

Now Huang wants (and needs) to put some of that power in your pocket.

and

Tegra generated less than $52 million in sales in the most recent quarter, or 6% of Nvidia’s total. Huang is promising a spate of new products next year tied to Google’s newest version of Android smartphone software. […]

Huang sees a day when mobiles with graphics cores will be able to identify objects through a camera, much like Tony Stark’s visor did in Iron Man 2. “To make that happen you need a supercomputer with all kinds of parallel-processing capability and a mobile device with parallel-processing capabilities. By connecting them you have a supercomputer in your hand,” Huang says.
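
For scale, the Tianhe-1A figures quoted above work out to roughly 625 megaflops per watt, and an upper bound of about 350 gigaflops per GPU if all of the throughput is credited to the graphics processors. A quick back-of-the-envelope check using only the numbers in the quote:

    flops = 2.5e15   # 2.5 quadrillion calculations per second
    watts = 4e6      # 4 megawatts
    gpus = 7200      # Nvidia graphics processors

    print(f"{flops / watts / 1e6:.0f} megaflops per watt")                                       # ~625
    print(f"{flops / gpus / 1e9:.0f} gigaflops per GPU (upper bound; ignores the CPUs' share)")  # ~347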

Iron Man’s visor? Not very compelling. What would be some more convincing motivations for “a supercomputer in your hand”?

Intel’s 10x consolidation of data centers


According to Erik Brynjolfsson, Paul Hofmann, and John Jordan

Meanwhile, companies of a certain size can get the best of both worlds by deploying private clouds. Intel, for example, is consolidating its data centers from more than 100 eventually down to about 10. In 2008 the total fell to 75, with cost savings of $95 million. According to Intel’s co-CIO Diane Bryant, 85% of Intel’s servers support engineering computation, and those servers run at 90% utilization—a combination of strategic importance and operational performance that would negate any arguments for shifting that load to a cloud vendor. Ironically, even as the utility model is being touted for computing, the highly centralized approach is becoming less effective for electricity itself: an emerging distributed power generation system features smaller nodes running micro-hydro, wind, micro-turbines and fuel cells. What’s more, many enterprises do in fact generate their own electricity or steam, for the same reasons they will continue to keep certain classes of IT in house: reliability, strategic advantage, or cost visibility.

Source: “Economic and business dimensions: Cloud computing and electricity: beyond the utility model”, Communications of the ACM, vol. 53, no. 5 (2010), pp. 32–34.