Don’t you ever wish you could just open your favorite browser, type a question (in natural language, not computer speak) into the input box, then wait a millisecond for the right answer? Or, better yet, just turn on the computer, verbally ask a question, and wait for a response (think the Starship Enterprise here)? Spencer Tracy and Katharine Hepburn dealt with the “information search & retrieval” problem in the 1957 movie “Desk Set,” in which a giant computer was brought in to supplement a staff of librarians, with the thought that the computer would aid in efficiency. The computer took your question, submitted on a sheet of paper, did some crunching in the background, then spat out the answer, correctly most of the time. Maybe even better than that would be something like a virtual librarian (think the one in Neal Stephenson’s 1992 book Snow Crash), an avatar who takes your question, sifts through, presumably, yottabytes of data in milliseconds, and gives an answer. Of course, the avatar (nothing more than code brought to life) is incapable of thinking, which is where the real problem lies. We haven’t realized Stephenson’s or Gene Roddenberry’s vision yet, but plenty of folks are working on it. To get a glimpse of the current status on this front, as well as where we might be headed, check out “The Ultimate Answer Machine” in the Aug. 6th issue of InformationWeek, or read it online here (same article, different title).
Archive for August, 2007
At the moment, all that stands in the way of the America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science Act (that’s long for the America Competes Act) is the President’s signature. Having passed by a wide margin in the House and then sailed through the Senate, the 18-month legislative effort appears to be on the verge of becoming the next big step in American science & technology competitiveness. According to Titles V and VII of the bill (H.R. 2272 for those hardcore legislative watchdogs), DoE and NSF are to be allocated more than $30 billion, combined, between 2008 and 2010. It’ll be interesting to see if the America Competes Act affects the outcome of the Sowing the Seeds Through Science and Engineering Research Act (H.R. 363), passed by the House in April. If you want to know how the funds for all the agencies are to be appropriated (at least at present), you probably should read the enrolled bill (the one passed by both the House and Senate), since there are four other versions of it. Kudos to the Tech Policy Summit and News.com for their info.
Are you willing to sacrifice some privacy for personal convenience or benefit? As digital technologies continue to evolve, creating, storing, moving, and sharing digital data and information more efficiently and more securely become the foci of innovation. The dilemma is that, while data and information become easier to manipulate, they inherently also become easier to access, both by you and by others. Invariably, this is a social problem as much as a technological one, so some argue that such innovation just exacerbates existing social ills. Such is the case with RFID (radio frequency identification) for tracking everything from products to pets to humans, and its potential socio-economic implications for medical care, national defense/security, and even religion. Want your medical information or other personal information with you all the time? This CNN article from yesterday provides good context on the issue.
We all know the glamour of having the fastest HPC machine, the most nodes, or the fattest pipes. But what ends up lost in the hoopla of all the hardware hype is the fact that someone has to write the code for this stuff to be even marginally useful for handling enormous computations. Herein lies one of the problems with high-performance scientific computing: not enough skilled programmers. Simply put, software development isn’t keeping pace with hardware development. This has been a problem for some time and still is. Writing code and programming applications (from middleware to debuggers) that enable a large, data-intensive computational problem to be broken into parts that are solved individually and then reassembled into a single solution is non-trivial. Though a little dated, Susan Graham and Marc Snir, of UC Berkeley and Illinois, Urbana-Champaign respectively, touched on this still-relevant problem in their February 2005 CTWatch Quarterly article “The NRC Report on the Future of Supercomputing.” Gregory Wilson, a CS professor, gets a little more specific in “Where’s the Real Bottleneck in Scientific Computing?” from American Scientist. A more recent discussion of the lag in software development can be found in Doug Post’s keynote talk “The Opportunities and Challenges for Computational Science and Engineering” from the inauguration of the new Virtual Institute - High Productivity Supercomputing (VI-HPS).
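To make the decompose-solve-reassemble idea concrete, here’s a toy sketch in Python (my own illustration, not anything from the articles above): a sum of squares is split into chunks, each chunk is computed independently, and the partial results are combined into one answer. Real HPC codes do this with MPI across thousands of nodes, which is exactly where the programming difficulty the articles describe comes in; the thread pool here is just for simplicity of illustration.

```python
# Toy illustration of the decompose / solve / reassemble pattern that
# parallel scientific codes follow at vastly larger scale.
from concurrent.futures import ThreadPoolExecutor

def partial_sum_of_squares(chunk):
    """Solve one piece of the problem independently of the others."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Decompose: split the input into roughly equal chunks.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Solve each part concurrently (real codes would use MPI or
    # multiple processes; threads keep this example self-contained).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(partial_sum_of_squares, chunks)
    # Reassemble: combine the partial results into a single solution.
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1000))
    print(parallel_sum_of_squares(data))  # same answer as the serial sum
```

The hard part in practice isn’t this happy path; it’s load-balancing uneven chunks, minimizing communication between parts, and debugging when thousands of workers misbehave, which is why the software lags the hardware.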