Google wins U.S. approval for radar-based hand motion sensor (reuters.com)
134 points by Cieplak on Jan 2, 2019 | 56 comments


> A loud clatter of gunk music flooded through the Heart of Gold cabin as Zaphod searched the sub-etha radio wave bands for news of himself. The machine was rather difficult to operate. For years radios had been operated by means of pressing buttons and turning dials; then as the technology became more sophisticated the controls were made touch-sensitive--you merely had to brush the panels with your fingers; now all you had to do was wave your hand in the general direction of the components and hope. It saved a lot of muscular expenditure, of course, but meant that you had to sit infuriatingly still if you wanted to keep listening to the same program.

> Zaphod waved a hand and the channel switched again.

http://www.technovelgy.com/ct/content.asp?Bnum=1329


FWIW this is from the ATAP group @ Google, which is DARPA -> Motorola -> Google. Not Google X/moonshot/rollerblades.

https://en.wikipedia.org/wiki/Google_ATAP

I've seen the tech first-hand and it's super cool.


Probably worth noting the prior discussion of the controversy[1] mentioned on the Wiki page.

[1] https://news.ycombinator.com/item?id=18566929


Thank you for mentioning this. My initial reaction when I saw this was a chilling sadness: "who did Google X screw over to get this?"

I hope the whole Google X nonsense gets scrapped. Operating the way they do has a real chilling effect on the sharing of ideas.


Can you expand on this comment a little more? Why does it matter which division of Alphabet corp makes something?


YouTube can censor your videos, but Google Fiber can't, though they are under the same umbrella.


Censorship isn't really a product. Also a website censoring itself vs an ISP censoring a website can be thought of as different actions. A website developing a hand sensor vs an ISP developing a hand sensor seem essentially the same. (Censor vs sensor pun not intended.)


Having spent a fair bit of time looking into applications for gesture inputs (Kinect and Leap Motion, mostly), I can tell you the hardest part by far will be the user experience design. Gestures are extremely unintuitive on their own. Users need really clear prompts and/or training to understand how to use them. And if you want to innovate and create new and subtle gestures as your product evolves, it only gets worse.

The Leap Motion is already pretty good and has some useful applications, but it's still very, very niche. The Soli looks like a real evolution of the technology in terms of both precision and how embeddable it is, but it's going to face the same challenges in user adoption. I'd expect this kind of thing to get more traction in industry than in people's homes.


I've worked in this space for educational software. You hit the nail on the head. Detecting what people do with their hands is easy. Building a UX out of that is hard.


I figure the only reliable way to extract gesture intents is to define them abstractly (as a set of drawings or what have you), have a large number of people try to execute them, and then design/learn the detector based on that.
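
For concreteness, a minimal sketch of that pipeline in Python. All names here are hypothetical, and `recordings` is assumed to be the list of (samples, label) pairs collected from the user study, with each recording shaped (frames, channels):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def extract_features(samples):
        """Summarize a variable-length recording with fixed-length stats."""
        return np.concatenate([
            samples.mean(axis=0),                           # average signal per channel
            samples.std(axis=0),                            # how much it varies
            np.abs(np.diff(samples, axis=0)).mean(axis=0),  # crude motion energy
        ])

    def train_detector(recordings):
        X = np.stack([extract_features(s) for s, label in recordings])
        y = np.array([label for s, label in recordings])
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        # Cross-validated accuracy tells you which gesture definitions
        # people actually execute consistently enough to detect at all.
        print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())
        return clf.fit(X, y)

The cross-validation step is the interesting part: gestures that people can't reproduce consistently show up as low accuracy before you ever ship anything.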


Concur.

Even screen-based gestures, even rather simple ones, tend to be difficult to pick up when they are not standard or common.

Every time I switch phones, I'm lost in the semantics, and don't care to learn the new ones so much.

It's more a matter of standards than anything else - if everyone could really agree on what 'three fingers this or that' meant ... we'd already all be doing it.


Microsoft HoloLens does a great job with gesture-input onboarding and discoverability/usability in the context of virtual keyboard use.


Video from Google about the technology (2015): https://www.youtube.com/watch?v=0QNiZfSsPc0


Go big or go home. That's one thing I like about Google/Alphabet. Not being afraid to try completely new things outside the normal comfort zone of their traditional product space, and aiming for the most radical potentially game changing ones at that.

I'm glad that Sergey and Larry have been able to stay at the helm, keeping Google/Alphabet from devolving into just another company chasing short-term quarterly earnings, and continuing to put institutions in place that will ensure the culture of innovation which has made the company so successful lasts long into the future.


Go big, let your product stagnate for a couple of years because it doesn't improve AdSense dollar revenue in a direct way, and then go home.

Reminder that Inbox is getting the shotgun to the head this year while Gmail still feels like it's stuck in the early 2010s, and that's just the most recent one.

Any area that Google dominates nowadays feels almost accidental, like they don't actually want to dominate that area, but the alternatives don't have Google's datacentres and thus aren't as good - e.g. YouTube.


Popular wisdom is that Google culture rewards the creation of new projects much more than their maintenance - which looks comparatively worse on a CV or when being considered for raises. Since they also hire mostly high-achievers, the result is that everyone wants to move on to new things and "old" products are soon left to wither.

I'd be curious to know the opinion of actual Googlers about this common theory.


I've yet to see a single explanation of why Google is supposed to maintain a non-profitable project/product indefinitely.

If Inbox (or the other projects that commonly come up in these whines) were a startup, it would have ended up on https://ourincrediblejourney.tumblr.com/ a long time ago. Why would Google keep maintaining and burning money on an unsuccessful free (!) product? Isn't "fail fast" the main mentality praised by startups here?


I wouldn’t expect anyone to maintain something unprofitable indefinitely, but Google’s reputation is to offer something free long enough to destroy any existing market and then kill it when there’s no one left to fill the void. It’d generally be better if they never offered it at all.


A lot of these projects Google has killed were very obviously going to be non-profitable from the start because they had no business model.

Take Allo or Wave, for example: how was a messaging app with no ads supposed to make any money even if it was successful? So why did they even bother in the first place if they're going to kill non-profitable projects?


This is, sadly, an accurate comment. Although to be fair to Google, they ported snoozing to regular Gmail and polished the UI a little.


What exactly is stagnating, in your opinion? The Google Ads platform is currently one of the primary means of generating the steady income that funds research and development into future technology, innovation, and growth.

I personally believe focusing so much on advertising is a short-term, narrow-minded way of judging Google/Alphabet, as the company is pretty much a fully fledged conglomerate at this point. Even so, advertising revenue has consistently grown 20-30% per year for almost 15 years now. Considering its sheer size, that's definitive and definite success, regardless of your qualms with how Google culture encourages employees to try new things even if they might fail. In fact, though it may seem random, it is exactly because Google/Alphabet is so willing to try and do new things that it can succeed in so many immensely different industries and markets, from molten salt energy storage all the way to pest control using genetically modified mosquitoes.


>What exactly is stagnating in your opinion?

All of Google's email software offerings, all their chat software offerings, their AR platform offerings, their VR platform offerings... does Android Things even exist anymore beyond a toy development kit?

I don't even bother with Google products anymore unless I see two years of concrete support & development. Early adoption of Google products does not pay off.


Very cool demo video.

I wonder if a phone camera + AI software could do an equivalent demo today?

Training a model to recognize certain types of finger movement seems trivial for AI now.
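
You can get surprisingly far without any learned model at all. A hedged sketch of the camera route, using OpenCV's dense optical flow to turn a webcam stream into a crude swipe detector (the 1.0 threshold is a made-up placeholder, not a tuned value):

    import cv2

    cap = cv2.VideoCapture(0)                  # default webcam
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow: per-pixel (dx, dy) motion between frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dx = flow[..., 0].mean()               # average horizontal motion
        if abs(dx) > 1.0:                      # hypothetical threshold
            print("swipe", "right" if dx > 0 else "left")
        prev_gray = gray

A real recognizer would feed these flow fields (or raw frames) into a trained classifier, but the sensing side really isn't the hard part anymore.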


Interesting. This might finally enable those holographic user interfaces shown in so many sci-fi movies.


Interesting tidbit:

> The Federal Communications Commission (FCC) said in an order late on Monday that it would grant Google a waiver to operate the Soli sensors at higher power levels than currently allowed.


Which is how many mW, at which frequency (~60 GHz, according to the article)?


I want this power


One area in which I think this could be big: deaf culture.

Instead of a device being triggered by voice, you could trigger commands by spelling them out via ASL, with NL prediction either guessing wisely or offering choices and then waiting for the user's confirmation signal.
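
The "predict wisely or give choices" step could start as simple prefix matching of the recognized letters against a command vocabulary - a toy sketch, with a made-up command list:

    COMMANDS = ["lights on", "lights off", "lock door", "play music"]

    def suggest(spelled, k=3):
        """Rank known commands by prefix match against fingerspelled letters."""
        spelled = spelled.replace(" ", "").lower()
        return [c for c in COMMANDS
                if c.replace(" ", "").startswith(spelled)][:k]

    print(suggest("li"))   # ['lights on', 'lights off'] -> ask the user to pick

Anything smarter than that (a language model over full sentences) is the same autocomplete problem voice assistants already solve.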


There's an app for that.

https://motionsavvy.com/uni.html

The OP is more about 3D recognition of mimed actions like turning dials and pushing buttons (virtual reality applications) than about recognizing symbols.


I don't think something like this will be capable of handling ASL any time soon.


> Facebook Inc (FB.O) raised concerns with the FCC that the Soli sensors operating in the spectrum band at higher power levels might have issues coexisting with other technologies.

Why did Facebook have a say in this? Are they building one too?


I'm not really sure how this works in the US, so take it with a grain of salt. In my country, anyone can raise concerns before something like this is approved.


It's roughly the same in the US. Typically, when changes are made to a government-accepted standard or to radio frequency rules, all of the currently participating organizations are given a chance to review and debate the changes.


As an aging techie, this is just the kinda new thing that will probably terrify me 10 years from now!

Just like the desktop GUI confounded our grandparents... we'll be faced with a machine that recognizes gestures.

Not wanting to do something unintended, we'll want to avoid these things just like our ancestors!


A few years ago Google released a video about controlling Gmail with gestures, as an April Fool's joke...


How precise are those sensors? The videos show use cases where gestures are used as a selector, but how does this perform when drawing, for example?


Google already has a pretty good idea of what you are doing at all times using the sensors in your phone... this will give them even more information about you... I hope at least that this new tech won't be able to read you from the next room (through walls).


Imagine this in a bracelet form factor that puts multiple sensors in a loop around your forearm which it uses to locate and sense the position of the opposite hand.

VR/AR use cases suffer from poor input controls; this might be a better approach.


So with this technology, will Google be able to scan anything in the room, including gestures? If this technology is used the same way microphones are used in IoT devices like Alexa, I have some concerns.


I have this exact technology documented from 2014, except my idea uses RFID. It could literally make anything touch-enabled, including existing non-touch monitors.


57 GHz? What sort of parts are they using to get a consumer product BOM cost?



Screw Infineon. They won’t provide the data sheet for their 80 GHz chipset, despite my buying $5k worth through their distributor.


Infineon support is truly world-class awful.

I’m looking at 100k with them this year and can’t get an answer to any technical question in any time frame. I probably wouldn’t select them again.


It's a custom-built chip according to https://atap.google.com/soli/


No idea what they are using, but TI makes a bunch of parts that do radar in the mm-wave band, which is where that frequency sits.

http://www.ti.com/sensors/mmwave/overview.html


So I just looked up the prices on Mouser and Digi-Key for an amplifier or mixer at those frequencies, and boy are they expensive. The R&D spend must be enormous for them not to be making those parts at these frequencies as separate components.


I mean, you can make a radar with a single transistor, 2 diodes, an exotic but off-the-shelf VCO, and clever transmission lines. I doubt the R&D cost is very high for the MMIC here.


TI is making a 60 GHz RADAR in a CMOS process. I’m sure R&D costs were heavy, but production will be cheap.

https://www.eetimes.com/document.asp?doc_id=1333330


I find gestures exhausting.


Why do they even need approval? Isn’t it really short range?


The article covers this in detail, was there something that wasn't clear?


I remember gesture-controlled light switches and other home automation being a thing since at least the eighties.

Yet again, this fires up my skepticism about the soundness of the American patent system.


Detecting some motion with a sensor and detecting precise motion using radar are very different things. It's a novel use of the technology, and I for one am perfectly happy with this kind of invention being patentable.


Do you understand that the difference there is not much bigger than the difference between detecting a motion gesture and detecting a motion gesture while claiming to do so precisely?

Both require mm-wave radar and some non-trivial DSP to recognise a gesture from clutter, and both have to be calibrated to significant precision.
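
For anyone curious what that DSP looks like in outline, here's a hedged numpy sketch of FMCW range-Doppler processing: two FFTs turn a frame of raw chirps into a range-Doppler map, and static clutter (which sits at zero Doppler) is removed by subtracting the per-range mean. The array shapes are assumptions, not Soli's actual parameters:

    import numpy as np

    def range_doppler_map(chirps):
        """chirps: complex array, shape (n_chirps, n_samples_per_chirp)."""
        rng = np.fft.fft(chirps, axis=1)             # range FFT per chirp
        rng -= rng.mean(axis=0, keepdims=True)       # remove static clutter
        dop = np.fft.fft(rng, axis=0)                # Doppler FFT per range bin
        return np.abs(np.fft.fftshift(dop, axes=0))  # zero Doppler centered

    # A moving finger shows up as energy at a nonzero Doppler bin; the
    # recognizer then classifies sequences of these maps as gestures.
    frame = np.random.randn(64, 128) + 1j * np.random.randn(64, 128)
    print(range_doppler_map(frame).shape)            # (64, 128)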

This is that glaring flaw of the current patent system, where there can be 20 very similar products, all covered by different patents, with the only difference being the verbiage of the patent.


What are some of these gesture-control systems that you remember? All I've seen are The Clapper and motion sensors.



