The recorded world: Every step you take

As cameras become ubiquitous and able to identify people, more safeguards on privacy will be needed

“THIS season there is something at the seaside worse than sharks,” declared a newspaper in 1890. “It is the amateur photographer.” The invention of the handheld camera appalled 19th-century society, as did the “Kodak fiends” who patrolled beaches snapping sunbathers.

More than a century later, amateur photography is once more a troubling issue. Citizens of rich countries have got used to being watched by closed-circuit cameras that guard roads and cities. But as cameras shrink and the cost of storing data plummets, it is individuals who are taking the pictures.

Through a Glass, darkly

Some 10,000 people are already testing a prototype of Google Glass, a miniature computer worn like spectacles. It aims to replicate all the functions of a smartphone in a device perched on a person’s nose. Its flexible frame holds both a camera and a tiny screen, and makes it easy for users to take photos, send messages and search for things online.

Glass may fail, but a wider revolution is under way. In Russia, where insurance fraud is rife, at least 1m cars already have cameras on their dashboards that film the road ahead. Police forces in America are starting to issue officers with video cameras, pinned to their uniforms, which record their interactions with the public. Collar-cams help anxious cat-lovers keep tabs on their wandering pets. Paparazzi have started to use drones to photograph celebrities in their gardens or on yachts. Hobbyists are even devising clever ways to get cameras into space.

Ubiquitous recording can already do a lot of good. Some patients with brain injuries have been given cameras: looking back at images can help them recover their memories. Dash-cams can help resolve insurance claims and encourage people to drive better. Police-cams can discourage criminals from making groundless complaints against police officers and officers from abusing detainees. A British soldier has just been convicted of murdering a wounded Afghan because the act was captured by a colleague’s helmet-camera. Videos showing the line of sight of experienced surgeons and engineers can help train their successors and be used in liability disputes. Lenses linked to computers are reading street-signs and product labels to partially sighted people.

Optimists see broader benefits ahead. Plenty of people carry activity trackers, worn on the wrist or placed in a pocket, to monitor their exercise or sleep patterns; cameras could do the job more effectively, perhaps also spying on their wearers’ diets. “Personal black boxes” might be able to transmit pictures if their owner falls victim to an accident or crime. Tiny cameras trained to recognise faces could become personal digital assistants, making conversations as searchable as documents and e-mails. Already a small band of “life-loggers” squirrel away years of footage into databases of “e-memories”.

Not everybody will be thrilled by these prospects. A perfect digital memory would probably be a pain, preserving unhappy events as well as cherished ones. Suspicious spouses and employers might feel entitled to review it.

The bigger worry is for those in front of the cameras, not behind them. School bullies already use illicit snaps from mobile phones to embarrass their victims. The web throngs with furtive photos of women, snapped in public places. Wearable cameras will make such surreptitious photography easier. And the huge, looming issue is the growing sophistication of face-recognition technologies, which are starting to enable businesses and governments to extract information about individuals by scouring the billions of images online. The combination of cameras everywhere—in bars, on streets, in offices, on people’s heads—with the algorithms run by social networks and other service providers that process stored and published images is a powerful and alarming one. We may not be far from a world in which your movements could be tracked all the time, where a stranger walking down the street can immediately identify exactly who you are.
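At its core, the identification step those algorithms perform is a nearest-neighbour search: a face is reduced to a numeric "embedding", and a stranger is identified by finding the most similar embedding in a gallery of stored faces. A minimal sketch of that matching step (the vectors, names and threshold below are invented for illustration; real systems use embeddings with hundreds of dimensions, computed by neural networks from the billions of photos published online):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(face, gallery, threshold=0.9):
    """Return the name of the closest gallery match, or None if no
    stored face is similar enough to count as the same person."""
    best_name, best_score = None, threshold
    for name, stored in gallery.items():
        score = cosine_similarity(face, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy gallery of stored embeddings, one per known person.
gallery = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.5],
}
print(identify([0.88, 0.12, 0.21], gallery))  # prints "alice"
```

The privacy concern follows directly from the code: once the gallery is large enough, the same three lines of lookup work for anyone whose photo has ever been published.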

Read more . . .





Toyota’s hyper-radical FV2 concept pushes personal transportation boundaries


Toyota’s FV2 Concept – part robot, part computer-human interface, part motorcycle, part car


Toyota’s already bold pursuit of new vistas in the realm of personal transportation took another quantum leap forward today, when the Japanese giant released details of the FV2, a concept car more closely related to the Kirobo humanoid communication robot than any vehicle currently on public roads.

In trying to explain the FV2 succinctly, it’s probably best to start with the one way it isn’t different from a contemporary car: it has four wheels. That’s about it, and even those are rearranged into a diamond layout, with the vehicle tilting in corners, a bit like a motorcycle with giant training wheels on each side.

The FV2 can be driven from a seated position with the canopy closed, or from a standing position with the canopy open, with the transparent canopy becoming a full-height windshield with an extensive augmented reality display.

In both cases, the vehicle is steered, accelerated and braked by body movement.

It’s not the first Toyota to sport a high-resolution display on its exterior, with the FUN Vii doing the show rounds for the last two years after being shown at the Tokyo Motor Show in 2011. Toyota’s experiments with expressing the driver’s emotions on a vehicle’s exterior date back more than a decade to the Personal Mobility Concept of 2003 and the POD concept of 2001, and the company patented this feature in 2002.

One of the many themes of the FV2 is the expression of Toyota’s “Fun to Drive” philosophy, and the computer-human interface we first experienced with the Segway and its natural weight-shift steering has been incorporated into the FV2 to create a greater physical bonding between car and driver.
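Toyota has not published how the FV2 translates posture into control inputs, but the Segway-style weight-shift idea can be sketched as a simple mapping from measured lean angles to drive commands. In this illustration (the function name, limits and scaling constants are all invented) forward lean sets speed and sideways lean sets steering, with both clamped so an exaggerated movement cannot exceed the vehicle’s limits:

```python
def body_to_commands(lean_forward_deg, lean_side_deg,
                     max_lean_deg=20.0, top_speed_kmh=60.0,
                     max_steer_deg=30.0):
    """Map rider lean (as reported by onboard pose sensors) to
    drive commands: forward lean -> target speed, side lean -> steering."""
    def clamp(x, lo, hi):
        return max(lo, min(hi, x))

    # Normalise each lean to [-1, 1], then scale to the command range.
    speed = clamp(lean_forward_deg / max_lean_deg, -1.0, 1.0) * top_speed_kmh
    steer = clamp(lean_side_deg / max_lean_deg, -1.0, 1.0) * max_steer_deg
    return {"target_speed_kmh": speed, "steer_deg": steer}

# Leaning halfway forward and slightly to the left:
print(body_to_commands(10.0, -5.0))
# prints {'target_speed_kmh': 30.0, 'steer_deg': -7.5}
```

A real implementation would add filtering and dead zones so that ordinary fidgeting does not register as a command, but the core of "steering by body movement" is just such a mapping.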

As cars and robots converge, advanced technologies will also be used to enhance the driving experience by connecting emotionally with the driver, and the FV2 is the first vehicle to incorporate some of the lessons learned in the Toyota Heart Project, a new communication research study featuring the well-known Kirobo and Mirata humanoid communication robots.

Robots are being developed for many uses, and Japanese robotics research is well advanced in the area of companion robots using artificial intelligence plus voice analysis, image recognition of facial expressions, body movement and hand gestures to respond in such a way as to create an emotional connection between humans and robots.

Read more . . .




The AquaTop Interactive Display System


AquaTop Display is a projection system that uses whitened water as its screen surface.

This system allows the user’s limbs to move freely through, under and over the projection surface. Using the unique characteristics of fluid, we propose new interaction methods specific to the projection medium: water. Our system uses a depth camera to detect input on and above the water surface, allowing interactions such as protruding fingers from under the water and scooping up the water with both hands. These types of interaction are not normally possible with impenetrable rigid surfaces. For example, by floating one’s limbs on the water surface, it is also possible to fuse one’s body with the displayed objects for further augmented interaction, ‘becoming one’ with the screen.
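The depth-camera detection described above can be approximated with a simple threshold: the system first calibrates the distance to the undisturbed water plane, then treats any pixel measurably closer to the camera as a fingertip poking above the surface. A sketch under those assumptions (the frame values and threshold are made up; a real Kinect-style sensor reports a depth in millimetres for every pixel):

```python
def detect_touches(depth_frame, water_depth_mm, min_height_mm=15):
    """Return (row, col) pixels where something protrudes above the
    calibrated water surface by at least min_height_mm."""
    touches = []
    for r, row in enumerate(depth_frame):
        for c, d in enumerate(row):
            if water_depth_mm - d >= min_height_mm:
                touches.append((r, c))
    return touches

# 4x4 toy depth frame; the water plane sits 1000 mm from the camera.
frame = [
    [1000, 1000, 1000, 1000],
    [1000,  970, 1000, 1000],   # a fingertip 30 mm above the water
    [1000, 1000, 1000,  995],   # ripple noise, below the threshold
    [1000, 1000, 1000, 1000],
]
print(detect_touches(frame, water_depth_mm=1000))  # prints [(1, 1)]
```

The threshold doubles as a noise filter: small ripples disturb the surface by only a few millimetres, so they never cross the detection height.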


Leap Motion gesture controller released at last


This looks like some exciting technology, even if it is a version 1.0 product.

Hot on the heels of the Leap Motion Controller, which began shipping last week, Leap Motion has released the accompanying software. The software allows people to control their computers with natural movements, detecting both hand and finger movements. In addition, the company launched its Airspace store which includes apps specifically designed for use with the device.

The apps available on the Airspace Store are designed for the motion sensor technology. The store currently has more than 75 free and paid apps that cover a broad range of categories, from educational apps to (of course) plenty of games. More apps are expected and the collection should grow as more developers are brought on board.

You’ll need a fairly modern Mac or PC for the Leap Motion Controller to work. It requires a Mac capable of running OS X 10.7 or higher, or a PC with an Intel Core i3, AMD Phenom II, 2 GB of RAM and Windows 7 or 8. Apparently it won’t just be for desktops either. The computer maker ASUS is partnering with the company to embed the sensor in high-end laptops and All-in-One (AIO) systems.

Read more . . .

via Gizmag – Brian Burgess


Leap Motion previews its gesture control magic on Windows


Leap Motion is on its way.

With the clock ticking down to the PC gesture controller’s July 22 launch, Leap has a brand new teaser video that showcases the device’s interaction with Windows. If you’d forgotten how exciting Leap was when we first got the chance to play with it, this might be enough to get your blood pumping again.

The clip shows hand gestures replacing the mouse pointer or multitouch input in a variety of Windows apps. For all of the flak Microsoft has taken for Windows 8, Leap and Windows 8 look like a perfect match. All of that multitouch capability – a point of contention for many PC owners – fits like a glove with Leap gestures.

In case you somehow missed the Leap Motion bandwagon, we’re looking at a tiny gizmo that rests next to your PC (desktop or laptop). Think a pint-sized Kinect that allows you to control your computer with mid-air waves, points, and other sleights of hand. It’s extremely sensitive, letting the tiniest of movements register onscreen. Cue the inevitable Minority Report comparisons.
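Recognising a mid-air gesture like a swipe boils down to analysing a short history of tracked fingertip positions. The sketch below is not the actual Leap Motion SDK (which exposes per-frame hand and finger objects); it simply classifies a sequence of sampled x-coordinates, with a minimum travel distance so that the tiny movements the sensor registers are not all mistaken for deliberate gestures:

```python
def classify_swipe(x_positions, min_travel_mm=80):
    """Classify a fingertip's recent x-positions (in mm) as a swipe.

    Returns "right", "left", or None if the hand did not travel far
    enough in one direction to count as a deliberate gesture.
    """
    if len(x_positions) < 2:
        return None
    travel = x_positions[-1] - x_positions[0]
    if travel >= min_travel_mm:
        return "right"
    if travel <= -min_travel_mm:
        return "left"
    return None

# A fingertip sampled over a few frames, moving steadily rightwards.
print(classify_swipe([0, 25, 50, 90, 120]))  # prints "right"
```

The same pattern generalises: circles, pinches and taps are all classified by testing a short window of tracked positions against a geometric template.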

Read more . . .

via Gizmag – 
