y7r4m's comments | Hacker News

About a decade ago I somehow came across Genudi[0], a Markov-chain-based "AI", and had quite a bit of fun with it. The creator has a blog that makes for an interesting read.

0: https://www.genudi.com/about


It would be awesome if we could get more things besides vapes (and apparently some flashlights; I assume there are many niches where they are common) to use 18650 or even 21700 li-ion cells. Most people I know buy AAs by the pallet and go through them regularly for their controllers, LED lights, kids' toys, etc., and few I believe bother to dispose of them correctly.

Also, repeating your sentiment: for all the tech gadgets (Bluetooth speakers, I'm looking at you), why not have replaceable batteries? There have to be enough vapers now that knowledge of this type of battery, as distinct from the old alkaline ones, has passed into mainstream consciousness. This would be a huge selling feature for me.

The reason I've seen is that rechargeable li-ion cells are more dangerous and a fire hazard, but is this really true? As with most anything that carries a risk if misused, I can find a few dozen instances where a vape battery went awry, but surely the benefits outweigh the concerns?

Edit: I do understand the irony of saying this on a post about when they do go boom.


The market for the end product (and the risk aversion of the manufacturer) makes a difference.

Flashlight and vape enthusiasts are mostly adults who likely trend toward all three of: older and more knowledgeable, more likely to take and accept risks, and more willing to pay a premium for the benefits of replaceable batteries... and the companies that make vapes and high-powered enthusiast flashlights are probably less worried about a customer suing them over a battery issue than a large toy manufacturer is. If you're a vape company, you have bigger safety issues to worry about -- like the normal operation of your products :)


> and few I believe bother to dispose of them correctly.

There are no mercury alkalines anymore for general consumer use; those collection bins were removed from stores in the '90s, and alkalines can be disposed of with normal waste.


I actually have a Bluetooth speaker that takes a removable 18650. It was branded "Polaris V8", but I think it's a white label product that's no longer in production. It still works, and most other ten year old Bluetooth speakers probably don't.

I'm with you on the risk/benefit calculation. E-waste is bad, and the option to bring a spare battery makes a lot of products more useful. A Li-ion cell can be dangerous if mishandled, but less so than a jug of gasoline or larger power tools.

This can be considerably mitigated by sticking a protection circuit on the end of a cell, which makes it no more dangerous than the proprietary Li-ion batteries used in things like cameras.


I didn't know many people were still buying alkaline AAs in large quantities. I've been using low-self-discharge (LSD) NiMH AAs and AAAs for, I think, more than 15 years and haven't looked back. They seem to work with everything.


There are still lots of poorly designed electronics that treat a NiMH cell within its normal operating voltage as being out of juice, and either nag you or shut down completely. My supposedly high-quality Logitech mouse is one example (I'm probably not buying anything from Logitech ever again; they're one of those brands coasting on their old credibility).


If it runs on two batteries in series and you're willing to take a risk, you can put a 3.7V 14500 battery and a dummy straight-wire "battery" in there. That gives you 1.85V per slot instead of the normal 1.5V, which might be too much for the device, but it beats the pants off the 1.2V you get from NiMH AAs.

I got 14500s for my Logitech F710 game controllers, and then drilled a hole in the battery compartment of the controller to make them plug-in chargeable. I've only just played with them a few times - no guarantee this is a long-term solution, but it seems to work well for now.

Note that this does mean you'll have a bin of things that look like AAs but might cause a fire or melt if you put them into the wrong AA-accepting device (the just-a-wire fake batteries even have all-caps warnings about never, ever putting them into a charger).


It actually gives you 2.1V per slot because a fully charged standard Li-ion cell is 4.2V. This is also sketchy because it will likely over-discharge the cell below 2.5V if not monitored carefully. Over-discharge makes it dangerous to charge the cell again.
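Putting the per-slot numbers side by side (a rough sketch for a two-cell-series device with one 14500 Li-ion plus a straight-wire dummy; the voltages are standard chemistry figures):

```python
# Per-slot voltage "seen" by a two-cell-series AA device when a single
# 14500 Li-ion cell (plus a straight-wire dummy spacer) replaces both AAs.
SLOTS = 2

def per_slot(cell_voltage, slots=SLOTS):
    # One real cell's voltage is averaged across every series slot.
    return cell_voltage / slots

liion_full = per_slot(4.2)      # 2.1  V -- fully charged, the risky end
liion_nominal = per_slot(3.7)   # 1.85 V -- the figure quoted upthread
liion_cutoff = per_slot(2.5)    # 1.25 V -- don't discharge below this
alkaline_fresh = 1.5            # per fresh alkaline cell, for comparison
nimh_nominal = 1.2              # per NiMH cell
```

Note that even at the 2.5V safe-discharge cutoff, the per-slot voltage still exceeds NiMH nominal, which is why the device keeps running right down to the danger zone.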

Actual protected 14500s will be too long in most devices meant for AA, but it's possible to find protected 14430 cells marked as "14500" from some flashlight brands like Acebeam and Skilhunt. Those are safe with regard to over-discharge, but the voltage of a fully charged cell might still damage devices not rated for it.

I'd rate this modification as risky and only suitable for people with significant battery expertise.

Edit: saw the other comment mentioning 14500s with USB ports. These will be protected against short circuit and over-discharge, and are actually based on 14430 cells.


Ah, thanks! Good to know I dodged that bullet by blind luck. I had picked up a couple of USB-charged versions of one of those old chubby non-rechargeable lithium batteries that were used in early LED flashlights (CR2, I think) to resurrect some old steel LED flashlights I found in a drawer, and got funny fantasies about doing other devices this way.

I saw some articles and ads for doing it with 3.1V LiFePO4 batteries, but I couldn't find any of those with USB charge ports... I guess your warnings are why you're supposed to use the 3.1V lithium phosphates for that. So I went with the 3.7V Li-ion because I really wanted that port.

I guess I dodged a bullet. Thanks for the warning. I actually did systems engineering as an undergrad (though I just work in software), so that makes me a bit overconfident with electronics even though I don't know jack about battery chemistry beyond the basic theory. I'll research more carefully the next time I undertake this kind of project.


It seems like you might be looking for "1.5V Li-ion AA", which is a 14430 with a buck converter stuck on the end.

I have pretty much the opposite preference regarding charging: I'd much rather swap in a charged spare and stick the drained battery in a slot charger than charge batteries inside devices. There's no waiting that way.


A device that won't work with NiMH due to voltage will also only use about a third of the energy in an alkaline. Poorly produced is an understatement.

My Logitech G604 works fine on NiMH. The calibration for reporting battery charge as a percentage is off, but it runs for months.


As an aside, pretty much every G604 will end up with double-click reliability issues or an inability to hold down the mouse button and drag. But you can easily replace the switches, or there are vendors on eBay and AliExpress that will sell you a circuit board with the switches pre-soldered for replacement.

Logitech no longer makes the G604 :(


I have a G302 sitting in a drawer on my to-do list for a similar double-clicky switch replacement.


I read that before I bought one. I have a screwdriver and a soldering iron.


If you can replace the battery, they can't sell you a new one.


Bingo!


Not the parent, just throwing this out there, but in Canada the provinces have been doing a relatively OK job of keeping track of COVID statistics. In Alberta [0] we have a fairly diverse population, and the numbers should somewhat generalize to other regions.

It would certainly appear that if we assume the risk of vaccination is relatively constant (the risk actually appears to be greater for younger, <55 y/o people), then it makes a lot of sense to prioritize the older populations.

https://www.alberta.ca/stats/covid-19-alberta-statistics.htm... -- scroll to the bottom for age distributions.

Edit: FWIW, I don't particularly support delaying vaccination for any age group; however, it should be recognized that the risk is apparently non-zero. I, for one, will be getting mine as soon as it is available to me.


Thanks. Agreed that vaccinating the elderly first is a reasonable strategy, though it would also be smart to hit the people that spread the most (the "hubs" in a graph of physical interactions).

I'm trying to go through the rough numbers of risk/reward: consider the potentially affected group with the lowest COVID death rate -- women 20-29 years old. If all got the AZ vaccine, ~10 per million would have a CVST, and 3 would die, using the German numbers, which are higher than other countries' [1]. (Double these numbers to assume the side effect only affects women and doses have been evenly distributed; double them again to assume this side effect only affects <65-year-olds and doses have been split between the elderly and younger medical workers [2].)

If the COVID IFR for 20-29-year-olds is 0.01% [3], that's 100 per million who would die if infected. Maybe the IFR is lower for women, and lower still with some effective treatments (dexamethasone, early recombinant antibodies) becoming available, but maybe it will be ~50% higher with prevailing variants.

So this vaccine may kill 12 per million women but save 100 per million. But the 100 per million is an unfair comparison -- presumably less than the entire population will be infected while waiting for a different vaccine. Assuming 1/4 of infections are detected, there are 1000 new infections per million in Germany each day. It would take 120 days to infect 12% of the population -- the point at which COVID deaths in this group match the 12 per million the side effect would cause -- so if switching vaccines for this group would delay them by less than four months, stop giving AZ and wait for another vaccine!
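To make the arithmetic above easy to check (same assumptions as stated in the comment; German CVST figures, 0.01% IFR, 1/4 detection rate -- no new data):

```python
# Back-of-envelope AZ risk/reward for women 20-29, per million people.
cvst_deaths = 3 * 2 * 2     # 3/million German CVST deaths, doubled twice
                            # (women-only, then <65-only) -> 12 per million

ifr = 0.0001                                  # 0.01% IFR for 20-29 year olds
saved_if_all_infected = ifr * 1_000_000       # 100 deaths per million averted

# Break-even infection fraction: COVID deaths match side-effect deaths.
breakeven_fraction = cvst_deaths / saved_if_all_infected      # 0.12

infections_per_million_per_day = 1000         # detected cases x 4
days_to_breakeven = round(
    breakeven_fraction * 1_000_000 / infections_per_million_per_day)
print(days_to_breakeven)    # 120 -> about four months
```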

Huh, I didn't really expect this result, and for older age groups the crossover point comes much sooner. Germany has decided that AZ only makes sense for 60+ year-olds, and maybe that's the right call, but I really wish they would be transparent about the reasoning.

[1] https://www.pei.de/EN/newsroom/hp-news/2021/210319-covid-19-... [2] https://www.statista.com/statistics/1195611/coronavirus-covi... [3] https://pubmed.ncbi.nlm.nih.gov/33289900/


Back in university I learned the ins and outs of RSA, and to be honest, it seemed simple, understandable, and frankly quite solid. It absolutely depends on p and q being chosen at random, but beyond that, it should be basically uncrackable.

Please let me know if I am wrong (outside of a quantum computing breakthrough).

edit: I understand that p and q need to be large primes. But there is a gargantuan number of large primes. AFAIK, if the number of possible shuffles of a 52-card deck exceeds the number of atoms in the universe, then surely RSA beats that by many, many orders of magnitude in computational complexity, even at a 1024-bit key length?
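For scale, the prime number theorem (π(x) ≈ x/ln x) gives a rough count of primes below 2^512, i.e. candidates for p or q in RSA-1024 -- a back-of-envelope sketch, not a rigorous count:

```python
import math

# Prime number theorem: the number of primes below x is roughly x / ln(x).
# For RSA-1024, p and q are ~512-bit primes, so count primes below 2**512.
bits = 512
# Work in log10 to avoid huge numbers: log10(x/ln x) = log10(x) - log10(ln x)
log10_primes = bits * math.log10(2) - math.log10(bits * math.log(2))

log10_shuffles = math.log10(math.factorial(52))   # ~67.9
log10_atoms = 80                                  # commonly cited estimate

print(round(log10_primes))     # 152: about 10^152 primes below 2^512
```

So yes, the prime supply dwarfs both 52! and the atom count -- which is exactly why the practical attacks are about implementation, not exhaustion.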


The article does list several real-world attacks based on:

  - Lack of padding
  - Padding oracles
  - Small public exponents
  - Badly chosen private exponents
  - Badly randomized large primes
  - Re-used large primes
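The first item is easy to demonstrate: textbook (unpadded) RSA is multiplicatively homomorphic, so an attacker who sees two ciphertexts can forge a valid ciphertext for the product of the plaintexts without the key. A toy sketch with deliberately tiny primes (never use these sizes for real):

```python
# Textbook RSA with toy primes p=61, q=53 (n=3233, phi=3120).
n, e = 3233, 17
d = 2753                      # private exponent: e*d = 46801 = 15*3120 + 1

def enc(m): return pow(m, e, n)
def dec(c): return pow(c, d, n)

# Malleability: E(a) * E(b) mod n == E(a*b) -- forged with no key at all.
forged = enc(2) * enc(3) % n
assert forged == enc(6)
assert dec(forged) == 6       # the forgery decrypts to the chosen product
```

Padding schemes like OAEP exist precisely to destroy this algebraic structure.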


It's the "steel door in a wooden frame" problem. The implementation is the weakest link, and not the theory.

Don't worry too much about quantum computers for now; worry about the attacks listed in TFA, about the history of attacks being discovered, and about the history of implementations remaining weak years after those attacks were discovered. And then consider that the NSA is the world's largest employer of mathematicians, each of whom has been toying with RSA since the very beginning of their career.


I am not a cryptographer, just a developer who has used cryptography in the past.

I don't find the historical argument very convincing. RSA has been around for a long time.

Will we be reading "do not use ECC" articles in 2039, after a comparable amount of research has been put into finding subtle, unexpected errors in ECC?


RSA has been around a long time, yes, and the Caesar cipher has been around for even longer. Part of the historical argument that you're dismissing is a very persistent Dunning-Kruger effect: very smart software developers don't know how much there is to know about crypto, and since RSA is "simple", it's easy to delude yourself into thinking that you can do it right.

And, yes, expect ECC to be broken. It was initially developed by Miller at the NSA, who only released that information when Koblitz discovered the cryptosystem independently. So they've been trying to crack it for a very long time, and you can be certain that they know of unpublished breaks. It's almost certain that they can break certain parameter classes, and the discrete log problem itself keeps getting weaker and weaker.

If moving away from weak crypto on a regular basis sounds like an undue burden, get out of the game. Don't roll your own; leave it to the experts, or you'll be doing yourself and your users a disservice.

I'm not a number theorist, but I did a lot of crypto and rubbed elbows with several NSA-employed cryptologists as an undergrad. I'm a decent developer, too, but I know far too much to think I'm qualified to roll my own.


The main point of the article is that RSA is easy to use wrong, unlike ECC -- so no, we won't (at least not for the same reason).


I suppose you could argue that weaknesses in ECC just haven't been discovered yet.

Whereas RSA is more thoroughly researched and various weaknesses revealed.

I don't know if that's true. I wrote a toy ECDSA implementation years ago (during high school), and compared to RSA, ECC is certainly more complicated. Sure, there are fewer parameters, and we currently know of fewer requirements for those parameters. But who is to say a weak class of ECC private keys won't be discovered in the future?

If you're paranoid about what weaknesses might be discovered, I suppose using RSA+ECC is an option :)


You're getting downvotes and not replies, so I'll bite. My guess is that it's because you're advancing a common crypto fallacy. Two weak cryptosystems do not combine into a strong cryptosystem. Do not wing it. Use vetted code; don't roll your own.


Speaking of the NSA, libsodium looks nice, but isn't anyone a bit worried that it's still hosted on GitHub?

That Microsoft is in bed with the NSA is common knowledge at this point, and that the libsodium authors choose to (keep being) associate(d) with them doesn't exactly fill me with confidence…

(Yes, the actual risk of the NSA messing with the repository without the libsodium authors' knowledge is probably very low, but still, it doesn't give the best impression…)


That's not how git works: every commit is content-addressed by its hash, so upstream tampering would show up as changed hashes to anyone with a clone.


> Back in university I learned the in and outs of RSA, and to be honest, it seemed simple, understandable, and frankly, quite solid.

If you have a degree in computer science and not in cryptography it is quite frankly unlikely that you learned more than the basics of RSA.

> It absolutely depends that p and q are chosen at random, but besides that, should be basically uncrackable.

No. There are many things that can go wrong even if you choose p and q adequately.


See for example Dan Boneh’s “20 years of attacks on the RSA cryptosystem” [1]. Itself now written 20 years ago and the attacks keep coming.

[1]: http://www.ams.org/notices/199902/boneh.pdf


tl;dr: "The attacks discovered so far mainly illustrate the pitfalls to be avoided when implementing RSA. At the moment it appears that proper implementations can be trusted to provide security in the digital world." (My own emphasis.)


We’re still waiting to find one of those proper implementations 20 years later.


> Back in university I learned the in and outs of RSA, and to be honest, it seemed simple, understandable, and frankly, quite solid.

No offense, but typical uni knowledge of these things leaves one woefully underprepared for understanding security subtleties, and uni-level API design in my experience is not mature and skews towards power and flexibility instead of ease and safety.


Interesting. I've got mixed feelings about this, as Echo VR (multiplayer) is definitely one of my most-played VR games. If you have a VR headset, it's something worth checking out.

On one hand, "boo facebook", on the other, hopefully this will give the Ready At Dawn developers a chance to deliver an excellent Lone Echo 2 experience without worrying so much about financial stress.

Again, if you have a chance: Echo Arena and Echo Combat are, in my humble opinion, the absolute best VR games on the market today, rivaled only by HL: Alyx. The unique zero-g locomotion is something that needs to be experienced to be understood. Being able to grab onto any surface, push yourself off in the direction you want to float, and then mix it up with thrilling Ender's Game-style gameplay is so completely amazing (albeit nauseating to some) that I'm surprised it isn't considered top tier and talked about much more.


Hi, I'm a developer at NexOptic[0] and we are a company that was deeply inspired by this paper when it was first published. We had a lot of early success when attempting to replicate the results on our own and ended up running with it, and extending it into our own product line under our ALIIS brand of AI powered solutions.

For those curious, our current approach differs in some very significant ways from the author's implementation, such as performing our denoising and enhancement on a raw Bayer -> raw Bayer basis with a separate pipeline for tone mapping, white balance, and HDR enhancement. We also explored a fair number of different architectures for the CNN and came to the conclusion that a heavily mixed multi-resolution layering solution produces superior results.
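For anyone unsure what "raw Bayer -> raw Bayer" means mechanically: a common first step in raw-domain pipelines (an illustrative sketch of the general technique, not NexOptic's actual code) is to pack the RGGB mosaic into four half-resolution color planes before feeding it to a network:

```python
def pack_bayer_rggb(raw):
    """Pack an RGGB Bayer mosaic (list of rows) into 4 half-res planes.

    Each plane holds one of the R, G, G, B photosites. Real pipelines
    would also subtract the black level and normalize by the white level;
    this sketch only shows the rearrangement.
    """
    h, w = len(raw), len(raw[0])
    assert h % 2 == 0 and w % 2 == 0
    r  = [[raw[i][j]         for j in range(0, w, 2)] for i in range(0, h, 2)]
    g1 = [[raw[i][j + 1]     for j in range(0, w, 2)] for i in range(0, h, 2)]
    g2 = [[raw[i + 1][j]     for j in range(0, w, 2)] for i in range(0, h, 2)]
    b  = [[raw[i + 1][j + 1] for j in range(0, w, 2)] for i in range(0, h, 2)]
    return r, g1, g2, b

mosaic = [[0, 1, 2, 3],
          [4, 5, 6, 7],
          [8, 9, 10, 11],
          [12, 13, 14, 15]]
r, g1, g2, b = pack_bayer_rggb(mosaic)
print(r)   # [[0, 2], [8, 10]]
print(b)   # [[5, 7], [13, 15]]
```

Working on the mosaic directly lets the model see sensor noise before demosaicing smears it across neighboring pixels.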

As other commenters have pointed out, the most interesting part is really coming to terms with the fact that, as war1025 put it, "The message has an entropy limit, but the message isn't the whole dataset." It is incredible what can be accomplished with even extraordinarily noisy information, as long as one has an extremely "knowledge-packed" prior.

If anyone has any questions about our research in this space, please feel free to ask.

[0] https://nexoptic.com/artificialintelligence/


It would be really cool if you could feed the network a photo taken with flash, which it could use to gather more information, but have it recreate a flash-free photo from the non-flash raw.

Often flash is not the look people are going for, but would be okay with the flash firing in order to improve the non-flash photo.


Absolutely! We recently rebranded our AI solutions from ALLIS (Advanced Low Light Imaging Solution) to ALIIS (All Light Intelligent Imaging Solution) specifically because we are beginning to branch out to handle use cases such as this!

As a proof of concept that this task can be tackled directly, a quick search brought up "DeepFlash: Turning a Flash Selfie into a Studio Portrait"[0]

Beyond denoising, we are already running experiments with very promising results on haze, lens flare, and reflection removal; super resolution; region-adaptive white balancing; single-exposure HDR; and a fair bit more.

One of the cooler things we are doing is putting together a unified SDK where our algorithms and neural nets will be able to run pretty much anywhere, on any hardware, with transparent backend switching (e.g. CPU, GPU, TPU, NPU, DSP, other accelerator ASICs, etc.).

[0] https://arxiv.org/abs/1901.04252


Before reading your reply to OP's comment I got to thinking about how the super-resolution process and flash photography might interact (https://news.ycombinator.com/item?id=22905317). I get the impression you left the point I got to a long time ago :)


DeepFlash: Turning a Flash Selfie into a Studio Portrait

https://www.youtube.com/watch?v=enLmReROhc8


The way I mistakenly initially parsed this comment gave rise to a potentially-dumb idea/question:

What would happen if you

- begin capturing video (unsure of fps) on a phone-quality sensor in a near-dark environment

- pulse the phone's flash LED(s) like you're taking a photo

- do super-resolution on the resulting video to extract a photo...

- ...while factoring in the decay in brightness/saturation in consecutive video frames produced by the flash pulse?

I vaguely recall reading somewhere that oversaturated photos have more signal in them and are easier to fix than undersaturated. Hmm.

IIRC super-resolution worked with 30fps source video for better quality; I wonder if 60fps or 120fps source video would produce better brightness decay data, or whether super-resolution could actually help extract more signal out of the decay sequence too.

On the other hand, I'm not sure if super-resolution fundamentally requires largely consistent brightness in order to work as well as it does. :/

Perhaps individual networks could be trained/tuned to specific slices/windows of the brightness gradient. I also wonder if it would be useful to factor the superresolution process into each of the brightness-specific stages or just to do it at the end.


For the most part, our effort has been focused on single exposure image enhancement, however we are beginning to use recurrent models to improve quality when video information is available.

Nonetheless, it's kind of a neat idea, so I tried testing its feasibility. I set up a recent flagship phone that claims 960fps super-slow-motion video capture next to another phone running a strobe app at 12Hz with a short delay between pulses.

https://www.dropbox.com/s/ha51ntucl3klkcb/cell_flash_960fps....

There are definitely a few frames where the LED is at an intermediate brightness, but teasing out the exact timing between the flash and the camera may prove difficult to synchronize correctly.

As for over-saturated images having more signal: although the PSNR calculation may give you a better number, in practice a region that is over-saturated is just a blob of 1s in the image (assuming float64 pixel values of 0-1), and there is no information there to extract. With a black level near but not at 0, we've found there is often more information hidden in the 'dark noise' than can be discerned by the human eye alone.
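A toy illustration of that asymmetry (assuming an idealized noiseless sensor; real sensors add noise on top): clipping at the top of the range is many-to-one, while small values near the black level stay distinct:

```python
# Why over-saturation destroys information while 'dark noise' need not:
# clipping is many-to-one at the top of the range.
def capture(scene_value, gain):
    """Idealized sensor: apply gain, clip to the [0.0, 1.0] pixel range."""
    return max(0.0, min(1.0, scene_value * gain))

bright = [0.6, 0.8, 1.0, 1.4]            # distinct bright scene radiances
dark   = [0.006, 0.008, 0.010, 0.014]    # distinct dim scene radiances

over = [capture(v, gain=2.0) for v in bright]   # all clip to 1.0
low  = [capture(v, gain=2.0) for v in dark]     # remain distinct

print(len(set(over)))  # 1: four radiances collapsed, unrecoverable
print(len(set(low)))   # 4: still separable, just buried near the floor
```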


Wow, cool, you actually tested it! And an effective test too.

Stepping back and forth throughout the frames (using mpv), the flash clearly enhances several spots of localized brightness where contrast pops out into clear relief.

The effect is clearest at the very bottom of the image which goes from "shadow blob" to "adequately discernible", but I think the area just above that (the 3rd vertical quarter of the image) is most interesting; the detail visible in frames 24-29 (immediately before 00:00:01 / 30.030fps) is excellent, and that's with the flash LED at peak brightness.

Flash synchronization would be effectively impossible to achieve (the camera would need to stream LED status information inside each frame), and even with "LED is on" information available it might provide no net gain: the exact point where the hardware says "LED is off" won't necessarily correspond to the moment the light decays to zero (at 1/960 = 1.0416 milliseconds per frame, the video suggests it takes about 2 frames, or ~2.08 milliseconds, for the light to decay), and the decay will never be the same as the flash sends light out into arbitrarily different environments. I can't help but wonder if calibration references for everything from Vantablack to mirrors would be needed... for each camera sensor... and then there would be the problem of figuring out which reference(s?) to select.

Staring at the video frames some more, two ideas come to mind: 1), analyzing all the frames to identify areas of significant difference in brightness, then 2), for each (perhaps nonrectangular) region of difference, figuring out the "best" source reference for that specific region. As an example reference, I'd generally use frame 13 for most of the image, and frame 44 or so (out of many, many possible candidates) for the bits that, as you say, become float64 1.00 :). Obviously a nontrivial amount of normalization would then be needed.

I'm not aware of how you'd do either of these neurally :) but the idea for (1) came from https://en.wikipedia.org/wiki/Seam_carving (although just basic edge detection may be more correct for this scenario), while the idea for (2) came from https://github.com/google/butteraugli which "estimates the psychovisual similarity of two images"; perhaps there's something out there that can identify "best contrast"? I'm not sure.

Trivial aside: I wondered why mpv kept saying "Inserting rotation filter." and also why the frame numbers appeared sideways. Then I realized the video has rotation metadata in it, presumably so the device doesn't need to do landscape-to-portrait frame buffering at 960fps (heh). I then realized the left-to-right rolling shutter effect I was seeing was actually a bottom-to-top rolling shutter. I... think that's unusual? I'm curious - after Googling then reading (or, more accurately, digging signal out of) https://www.androidauthority.com/real-960fps-super-slow-moti... - was the device an Xperia 1?

(And just to write it down for future reference: --vf 'drawtext=fontcolor=white:fontsize=100:text="%{n}"' adds frame numbers to mpv. Yay.)


Sounds like you have taken this pretty far, do you have any example outputs? The only one I found via your website was a PDF with a low res image with no context.


Sure! We have a short deck[0] that gives an intro to our noise reduction, and here is a folder[1] showing a calibration target we captured with an actual camera (20ms, f/22) in low-light conditions: (original, 100x gain, 100x gain + ALIIS).

We also have some more raw data[2], where the original Bayer data is available as .npy files with 40dB of analog gain applied; however, I think the calibration targets show what we can accomplish more dramatically. Finally, we have a short YouTube video[3] that shows how it works when applied to video.

[0] https://www.dropbox.com/s/0bm4dpxhn35vkhe/ALLIS_Investor_Int...

[1] https://www.dropbox.com/sh/k861saentyq1cs6/AADmO7X_L49nUkEI_...

[2] https://www.dropbox.com/sh/fv8omdf4fbx59m9/AABDnf6sdvv7rtIml...

[3] https://youtu.be/99Cq1bWCmMM

