Courtesy Victor van Werkhooven
What are the most popular PINs people use? What are the least popular? How can you make sure your PIN is something crooks have a lower chance of figuring out? This is a pretty neat report on what works, and what doesn't, when selecting a personal identification number to use with financial or electronic devices. You'd be surprised how often people make mistakes that leave their PINs easy to uncover.
Courtesy J. Rogers, University of Illinois
Let’s be honest with ourselves: who doesn’t love a good temporary tattoo? There’s something glorious in that small square of paper promising instant, kid-appropriate street cred; in the anticipation that builds during the 60 agonizing seconds it takes to hold a wet washcloth against your upper arm until you can’t stand it anymore and just have to peek.
Now what if, embedded in your two-day skull and crossbones, there was a computer? One that was soft and pliable and thinner than a strand of hair, and gave your doctor data about your heart function, brain waves, and muscle activity?
Well, wonder no longer, because now it’s possible thanks to a company called mc10. Will the wonders of nanotechnology never cease?
According to a recent article by the National Science Foundation: “One of the advantages of the newly created epidermal electronic systems is easy on / easy off application. As this video shows, the electronics have the right physical properties--such as stiffness, bending rigidity, thickness and mass density--to perfectly match to the epidermis.
Courtesy J. Rogers, University of Illinois
The systems seamlessly integrate and conform to the surface of the skin in a way that is mechanically invisible to the user, and the devices have the potential to provide a range of healthcare and non-healthcare related functions.”
Courtesy J. Rogers, University of Illinois
Anyone else’s Hey!-Wait-Just-a-Minute Alarm go off on “non-healthcare related functions?” A temporary tattoo/computer tasked with non-healthcare related functions like what, exactly? Curious.
On the up-side, it appears that there might be some fascinating future uses. Again, the National Science Foundation: “The researchers are also exploring clinical approaches, particularly for ailments where sensor size is critical, such as sleep apnea and neonatal care.
“Much further into the future, the researchers hope to incorporate microfluidic devices into their technology, opening up a new arena of electronic bandages and enhanced-functioning skin, potentially accelerating wound healing or treating burns and other skin conditions.”
A 1956 computer hard drive. 5 megabytes -- roughly the size of a single MP3 song -- and weighing one ton.
Courtesy Yutaka Tsutano
I have been waiting for the new iPod Touch. I want a display screen so sharp, it looks like a photograph. The "retina display" creates an image out of pixels that are only 78 microns (millionths of a meter) across. How small is that? Well, more than 300 of these pixels are packed into each inch. Supposedly this is the limit of human perception, or as some fanboys might say, "It doesn't get any better than this!"
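For the curious, that "more than 300 per inch" figure falls straight out of the pixel size. A quick back-of-the-envelope check (one inch is 25,400 microns):

```python
# How many 78-micron pixels fit in one inch?
MICRONS_PER_INCH = 25_400  # 1 inch = 25.4 mm = 25,400 microns

pixel_pitch_um = 78  # approximate pixel pitch of the "retina display"
pixels_per_inch = MICRONS_PER_INCH / pixel_pitch_um

print(round(pixels_per_inch))  # prints 326
```

Which matches the roughly 326 pixels per inch Apple advertises for the display.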
University of Michigan researchers can do better, though. Their paper in Nature Communications, titled "Plasmonic nanoresonators for high-resolution colour filtering and spectral imaging," explains how pixels of only 10 microns can be produced.
Such pixel densities could make the technology useful in projection displays, as well as wearable, bendable or extremely compact displays, according to the researchers.
The resonators act as a kind of light filter. Two nanometers-thin layers of metal selectively allow light to pass through small sets of slits, and the slit spacing determines which wavelength of light makes it through.
Red light emanates from slits set around 360 nanometers apart; green from those about 270 nanometers apart, and blue from those approximately 225 nanometers apart. The differently spaced gratings essentially catch different wavelengths of light and resonantly transmit through the stacks. LinuxForDevices.com
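To make those numbers concrete, here is a toy Python lookup built only from the slit spacings quoted above. It is just a nearest-pitch mapping of the reported values, not a model of the plasmonic resonance physics:

```python
# Slit pitches (in nanometers) and the colors they transmit, as
# reported by the researchers. The nearest-pitch lookup is a toy
# illustration, not a physical model.
SLIT_PITCH_TO_COLOR = {360: "red", 270: "green", 225: "blue"}

def color_for_pitch(pitch_nm: float) -> str:
    """Return the color whose reported slit pitch is closest to pitch_nm."""
    nearest = min(SLIT_PITCH_TO_COLOR, key=lambda p: abs(p - pitch_nm))
    return SLIT_PITCH_TO_COLOR[nearest]

print(color_for_pitch(365))  # prints red
print(color_for_pitch(230))  # prints blue
```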
These displays are simpler, use fewer parts, are more efficient, and should be cheaper to make. I am not going to wait, though.
Apple is amazing. Just amazing. They make computers with absolutely no viruses, they are the creators of the famous iPod, they released a handheld touch-screen computer, the iPad, and they make the best smartphone ever, but they still get criticized for every little mistake! For example, if you cover up a part of the new 4th-generation iPhone, you get bad reception! People were very angry at them, but they didn't deserve it! They had made an awesome phone with TWO cameras!!! All you needed was a little case anyway! Also, everyone criticizes their Macs for being too expensive! They are pretty much perfect, so what do you expect? Apple is a great company, and they don't deserve the criticism that they are getting.
Graphene is a single-atom-thick layer of carbon atoms in a honeycomb-like arrangement (read more about graphene here on ScienceBuzz.org).
Transistors are like valves that can turn the flow of electricity off and on. Computers can use transistors and logic circuits to solve all kinds of problems, and those problems can be solved faster if the transistors can switch faster. Transistors made out of graphene can now switch on and off 100 billion times per second (100 gigahertz). State-of-the-art silicon transistors of the same gate length switch at about 40 gigahertz.
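The idea that simple on/off switches compose into problem-solving logic can be sketched in a few lines of Python. This toy models each gate as a function, with NAND (buildable from a pair of transistors) as the universal building block from which everything else follows:

```python
# Idealized sketch: a transistor acts as an on/off switch, and switches
# compose into logic gates. NAND is "universal": every other gate, and
# ultimately arithmetic, can be built from it alone.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

# A half adder: the first step from switches toward arithmetic.
def half_adder(a: bool, b: bool) -> tuple:
    total = or_(and_(a, not_(b)), and_(not_(a), b))  # XOR via AND/OR/NOT
    carry = and_(a, b)
    return total, carry

print(half_adder(True, True))  # prints (False, True): 1 + 1 = binary 10
```

Every gate here bottoms out in `nand`, so a faster transistor (the switch inside each NAND) speeds up the whole circuit, which is why the jump from 40 to 100 gigahertz matters.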
IBM just announced their breakthrough in the journal Science.
Uniform and high-quality graphene wafers were synthesized by thermal decomposition of a silicon carbide (SiC) substrate. The graphene transistor itself utilized a metal top-gate architecture and a novel gate insulator stack involving a polymer and a high dielectric constant oxide. The gate length was modest, 240 nanometers, leaving plenty of space for further optimization of its performance by scaling down the gate length. ScienceDaily
Courtesy Arnold Reinhold
On October 15, 1956, John W. Backus published a manual explaining a new way to program computers.
“John Backus and his Fortran project members almost single-handedly invented the ideas of both programming languages and (optimizing) compilers as we know them today.” Wired
Instead of writing complex machine code, which took weeks, programmers could write Fortran code in hours, and it was much easier to read.
I was even able to learn Fortran back in the late 60's. It even satisfied my foreign language requirement!
More than half of people over 60 have a hearing loss (I am in that group). The demand for lip-reading skills is driving technology. I foresee that we will soon have portable devices that will "read lips" and either show the words on a display or, if the person is deaf and blind, produce tactile symbols (braille) on a touch pad.
A research team from the School of Computing Sciences at UEA compared the performance of a machine-based lip-reading system with that of 19 human lip-readers. They found that the automated system significantly outperformed the human lip-readers – scoring a recognition rate of 80 per cent, compared with only 32 per cent for human viewers on the same task. Science Daily
By analyzing results of computerized recognition of facial speech patterns, researchers hope to produce better visual speech synthesis. Computer generated "talking heads" are being evaluated to create the most intelligible and visually appealing system.
Courtesy Kurt Seebauer
This has been in the news recently, but it didn’t occur to me until just now that it really has a place on Science Buzz.
Alan Turing was an English mathematician and one of the fathers of computer science. He helped develop some of the earliest computers, and created some of the very first designs for a “stored-program” computer (a computer that keeps both its data and its instructions in memory, as opposed to one that requires the operator to input every step).
He was also interested in artificial intelligence, and proposed an experiment called the Turing test, meant to determine if a machine was truly intelligent. (Basically, a computer that could fool a human into thinking that he or she was talking with another person would pass the Turing test.)
Turing was also a code breaker, which is where the “war hero” part comes in. The day after the United Kingdom entered World War II, Alan Turing went to work for the Government Code and Cypher School, an organization meant to break enemy codes. At GCCS, Turing and his colleagues developed automatic code-breaking machines to decipher the elaborately encrypted messages of the Axis forces.
Turing’s work in collecting German military secrets through code breaking has been said to have shortened WWII by as much as two years, saving thousands of lives.
Alan Turing was also gay, and when he admitted this to the police after his home was broken into, he was charged with “gross indecency” under a law that essentially made homosexuality a criminal offense. Turing was given the choice of going to prison or accepting probation on the condition that he undergo chemical castration. Chemical castration involves the administration of drugs that change the subject’s hormone balance. This can cause the loss of sexual drive, as well as loss of hair, muscle, and bone density.
Two years after his conviction, Alan Turing killed himself.
It was a pretty awful way to treat someone who had contributed so much to the peace and safety of the world, as well as to the revolutionary discipline of computer science. This month the British government finally issued an apology to Alan Turing, acknowledging the scientist’s great contributions to humankind, as well as the shameful way he had been treated by his own government.
So there you go. Let’s not let it happen again.
Courtesy aNantaB
I think there’s a pretty good chance that the robots that control the Internet will censor this, so read fast, Buzzketeers.
Web 3.0, or as I like to call it, “Skynet,” is looming closer than ever today. It’s like a big old thunderhead, gray-black, full of lightning, and bearing down on us from above. Except when Skynet 3.0 gets here we aren’t getting wet, we’re getting turned into the freakin’ Borg. (Although we might also get wet, if there are real clouds around too.) And as cool as it would be to have a drill hand and a laser pointer taped to my head, I don’t think I could stand being any more pale. And so we must fight. For my sake.
This week the blossoming threat is taking the form of cleverer computers. Computers with huge, muscular, brick-breaking vocabularies. Computers that don’t just know all the words—they know what all the words mean. Computers like robotic English majors, except they can also do math and get jobs.
See, a “semantic map” has been developed for computers—a program that would allow a computer to “understand” words based on their tenses and contexts. A direct application of this technology would be in search engines; instead of being limited to searching for exact word matches, the program could look for something based on the meaning of your words. For instance, if I were to type “The terminator learns some bodacious new phrases” into Google’s search engine right now, I’d get a bunch of worthless nonsense returned to me. But with an engine that used a semantic map, I like to think that such a search would return to me with some clips from Terminator 2, in which John Connor is teaching the T101 some useful new phrases like “Hasta la vista,” and “hands off the merch, bro,” and “Cheese it! It’s the fuzz!” I could then post these clips on a science blog.
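To make the exact-match vs. meaning-match distinction concrete, here's a toy Python sketch. The tiny synonym map is invented purely for illustration; the real semantic map holds meanings for vastly more words and captures tense and context, not just synonyms:

```python
# Toy illustration of exact-match search vs. "semantic" search.
# The synonym map below is made up for this example; a real semantic
# map would cover an enormous vocabulary.

SEMANTIC_MAP = {
    "bodacious": {"excellent", "awesome", "cool"},
    "phrases": {"sayings", "expressions", "lines"},
}

def expand(term: str) -> set:
    """A term matches itself plus anything the map says it means."""
    return {term} | SEMANTIC_MAP.get(term, set())

def exact_match(query: str, document: str) -> bool:
    doc_words = set(document.lower().split())
    return all(w in doc_words for w in query.lower().split())

def semantic_match(query: str, document: str) -> bool:
    doc_words = set(document.lower().split())
    return all(expand(w) & doc_words for w in query.lower().split())

doc = "the terminator learns awesome new sayings"
print(exact_match("bodacious phrases", doc))     # prints False
print(semantic_match("bodacious phrases", doc))  # prints True
```

The exact-match search comes back empty-handed because the literal strings "bodacious" and "phrases" never appear, while the semantic version finds the document through the words' meanings, which is exactly the difference described above.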
And that, incidentally, is the only positive scenario I could imagine coming from this technology. If I’m searching for, say, “sexy Easter bunny,” I just wants me some pictures of the Easter bunny in a speedo—I don’t want my computer to actually understand what’s wrong with me. And there’ll be no escaping their powerful understanding. Even this early semantic map is said to have a vocabulary 10 times the size of the average college graduate. That’s, like, super… not good.
There is still hope, however, so relax your little fret glands for a moment. I have a plan, and you already know my plan if you read the heading of this post: invent new words. But keep them to yourself—I wouldn’t underestimate Web 3.0 so much as to think that it couldn’t figure out what was going on after a while. Also, be sure to change the definitions of already existing words, and change them often. It will be like linguistic guerrilla warfare; a definition could pop up in one spot (word), fire off a couple of shots, and then be gone, already looking for a new hiding place. Web 3.0 might send an air strike against this whole paragraph, but it won’t matter—by the time the missiles get here, the passage will mean something else entirely. My meaning will be setting up a new camp, hopefully in a stand of old swear words.
Are you with me, folks? I knew I could count on you! Progress won’t get us that easily!