Technically, Catalonia is still a part of Spain, but I appreciate you recognizing their sovereignty!


This is referring to StackOverflow's Talent product, which is a software-engineering-focused hiring platform. Seems like they furloughed the sales and marketing team for this product, due to an expected slowdown in market-wide hiring.


Ohh I misunderstood, I thought that's what they call their recruiting team


Yes, this is what happened, though only a portion of the team related to Talent was furloughed, not the whole team. It is still an active product, but sales have taken a big hit from the downturn in the overall hiring environment due to Covid-19.


This is due to how cryptocurrencies are classified by the IRS. Currently, they are classed as assets (property) as opposed to currencies, and thus any "realized gains" are taxable events. This includes the above example, where you may have technically "realized a gain" while buying a grilled cheese with Bitcoin.

https://www.irs.gov/businesses/small-businesses-self-employe...
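
To make that concrete, here's a minimal sketch of the grilled cheese scenario (all numbers are hypothetical, and this ignores short- vs. long-term rates):

    #include <stdio.h>

    int main(void) {
        /* Hypothetical: you bought a bit of BTC for $2.00, and later spent
           it on a $5.00 sandwich. Spending crypto is a disposal of property,
           so the difference is a taxable capital gain. */
        double cost_basis        = 2.00; /* USD paid for the BTC you spent   */
        double fair_market_value = 5.00; /* USD value when you spent it      */
        double realized_gain     = fair_market_value - cost_basis;
        printf("Taxable realized gain: $%.2f\n", realized_gain); /* $3.00 */
        return 0;
    }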


I've found that Instagram use is growing amongst nearly all of my friend groups (including HN demo and non), while use of all other social apps is fading (not counting WhatsApp). For reference, mostly age 24-32 in the US and Western Europe.


It has the benefit of posting something simple and easy: a picture. And those pictures rarely convey opinions, so there are fewer flame wars on Instagram.


Same! Python requests is one of my favorite libraries of all time. Kenneth Reitz is a treasure.


Fantastic idea. Pour-overs are the fastest decent brewing method, and usually the cheapest. Making it portable is a really cool little innovation. If the coffee is good, I could see this really selling.


In Portugal, very few places accept credit cards, or even debit cards in many cases. This is mainly so the businesses can avoid the crazy fees. It's inconvenient, but the prices are probably slightly cheaper, so people don't really complain.


It's also used by those businesses as a way to avoid paying the full taxes owed, by underdeclaring profits. Notice the trend: heavily cash-based countries tend to have low tax revenue and a struggling underclass. Spain, Portugal, Eastern Europe, Greece.


Restaurants in Germany as well. When you ask them why they don't accept cards, they usually say something like "I don't like government surveillance", and you know what's up.

Some countries in the EU started requiring cash registers and receipts, and some merchants saw quite impressive jumps in declared revenue in the month they were introduced :)


Portugal is in the EU, so how come credit card fees can exceed the EU maximum, which is a fraction of a percent (and likely less than the cost of cash handling for all but the smallest merchants)?


I suspect you're misunderstanding what that limit applies to: the EU cap covers the interchange fee, not the merchant's total processing cost. Until you are taking substantial revenues as a merchant, you are surely going to be paying more than a small fraction of a percent in credit card processing fees one way or another.
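
As a rough sketch of how the components stack up for a small merchant (only the interchange cap comes from the EU regulation; the other figures are assumptions):

    #include <stdio.h>

    int main(void) {
        /* The EU cap applies only to the interchange component of a card
           payment; scheme fees and the acquirer's markup are not capped. */
        double interchange = 0.003; /* 0.3% EU cap, consumer credit cards  */
        double scheme_fee  = 0.002; /* assumed card-scheme fee             */
        double acquirer    = 0.015; /* assumed markup for a small merchant */
        double total = interchange + scheme_fee + acquirer;
        printf("Effective merchant rate: %.1f%%\n", total * 100.0); /* 2.0% */
        return 0;
    }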


Small merchants probably pay north of 2-3% to some intermediary provider (I hope a future regulation will cap this as well). I still see almost no merchants here that don't take credit cards, though, likely because when card use reaches a critical mass, you 1) lose too much business if you don't accept cards (I can go 6 months without touching cash) and 2) the amount of cash shrinks to the point where the marginal cost of the next cash transaction is also pretty high. E.g., if you do 1000 card transactions and only 10 cash transactions in a business day, then each cash transaction has to carry 1/10th of the cost of cash handling for the day.
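
Here's point 2 as a tiny sketch, with assumed numbers:

    #include <stdio.h>

    int main(void) {
        /* Fixed daily cost of handling cash at all: counting the till,
           reconciling, bank deposits, etc. (assumed figure). */
        double daily_cash_cost = 20.0;
        int cash_txns = 10; /* vs. 1000 card transactions that day */
        printf("Cost per cash transaction: $%.2f\n",
               daily_cash_cost / cash_txns); /* $2.00 each */
        return 0;
    }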


Nailed it. Traveling is amazing, but it certainly loses its luster after a while. I must say we're extremely fortunate to have been able to even make it to this point though.


What exactly is the benefit of Elsevier for researchers? Is there no free way to publish papers?


Elsevier own some really high profile journals, including The Lancet and Cell. Refusing to publish in these journals, or similar ones in terms of impact factor, is simply not an option for junior faculty in most departments. I have friends for whom "publishing in Nature, Science and/or Cell" is literally a precondition for getting tenure.


Why does academia put up with this stupid system? Academics are already the ones doing all the peer review on their own, free of payment, right? So why not say "Hey, we'll set up our own servers, host papers free for all to access, and then note which ones pass the same levels of scrutiny we'd apply as reviewers were we reviewing them for Nature, Science, Cell, The Lancet, whatever"?

Doesn't seem like the journals actually provide much beyond asking professors to do stuff for free, both in terms of writing papers and reviewing them. Is inertia really worth letting them leech so much money and lock down access (antithetical to the whole idea of public-ation...) while they're at it?


It reminds me of the Electoral College: people complain about it and recognize it's an outmoded system, but it will never get replaced, because the people with the power to actually change it won't, since the system got them that power in the first place. Academic publishing isn't quite that bad, since it is slowly changing, but I think it is similar.

Established faculty largely don't care because they are established (there are exceptions, of course). Some new faculty would like to change it, but if they do set up their own system, it won't be taken seriously by the administration or the senior faculty in charge of tenure review. There are also new faculty who don't know or care about these things, so they just do what they know, which is traditional publishing.

I was at a faculty development session a couple months ago. There were 20 faculty members there from various departments and the topic turned to open access journals. Some people were arguing that they weren't worth publishing in because tenure review boards haven't heard of them so they don't take them seriously. Then one guy - remember, this is a college professor - asked "where are these papers stored?" He wanted to know where the actual servers were physically located. And then he said, "This whole online thing seems like Big Brother."

That being said, things are changing, and in some fields open access journals are seen as reputable and accepted, but they are still new, and in some fields (non-STEM mostly) they are viewed much more skeptically. So if you are in those fields and you want tenure, you are going to try to get published in the old journals first.


The Electoral College still offers a modicum of protection against the tyranny of the masses. In a pure popular vote system, presidential candidates wouldn't have to educate themselves enough to pretend to care about the lives outside the US's 10 largest cities.


Because most leading academics are happy with the system and don't care about open access. This is slowly changing as the Internet generation takes over.


Is tenure still as widely available as it once was?

But yes, entrenched interests hold everyone back eventually. They are currently being disrupted, and this time they lost the battle completely - there is not a single thing they can do to effectively keep their entrenched interests from crumbling.


React Native doesn't deliver a poor user experience though. It's not the right choice for every project, but in most cases users won't be able to discern between RN and native.


On iOS maybe, but not on Android. You have to reimplement a lot of the things you get for free when developing native apps. Most animations, etc., you have to add yourself. The problem is that the React Native team has some decent iOS stuff, but on Android it's all sorely lacking.

Besides Facebook's Ad Manager (which barely cuts it) I haven't seen any React Native app on Android that didn't feel horrible (and not like an Android app).


Does the Facebook app use React "Native"? Because that app is a horrible power hog, and vastly glitchy and buggy. That is a poor user experience compared to what a well-written native iOS app could deliver. Users might not know it's a poor experience, because they can't compare it to how fast and responsive the app would be if it were actually written in Swift.

There are people in this world who think Olive Garden is great Italian food. There are people in this world who think React "Native" is a good user experience.

Why does Facebook absolutely insist on avoiding writing actual native apps? As in Swift for example. It seems like they are almost allergic to actual native and instead focus on this inferior cross platform stuff.

In 'most cases' users won't know the difference? Sure they will; they'll notice that the app they're using sucks more than other apps on their phone. They'll tolerate the glitchiness, the occasional blank data screen during a load, because they care about the content more than the terrible experience. All because a developer or the CTO somehow thinks "it works well enough" is the same as "let's really give our users the best possible experience." I am amazed that Facebook has thousands of employees but can't be bothered to write a single line of code in Swift. They've been insisting on this stubborn course of action since the beginning of their mobile experience. Does Zuckerberg just hate Obj-C or Swift? Why are they making the mobile experience into the lowest common denominator? Why does their Facebook app feel like some cheap PhoneGap experiment? My Facebook app on iOS performs exactly the same as it does on a years-old Android phone, and that's ridiculous. I have superior hardware, and yet I get to run an inferior app because JavaScript? It's like socialism for apps: make everyone equally miserable.

The simple fact is this: I hate cross-platform systems because they end up averaging down the quality of the mobile experience, with capabilities reduced to support the lowest common denominator. If I want my apps to run as terribly as many do on Android, I will use an Android. It's lazy development. It's a means for the JavaScript crowd to avoid learning Swift (or Java) so they can ship middling-to-bad mobile apps rather than actually building the absolute highest-quality product they could build.

Even Facebook does it! I feel like the React Native ecosystem is doing more to reduce the quality of the mobile app experience than anything else. Apps are being turned into these average pieces of crap with only the UI being slightly different. Does anyone have any performance benchmarks comparing React "Native" to Swift? Any data at all? Or are we just so excited to write apps in React that we fail to care? If we care about cross-platform development, we can already do that; it's called the web. Let's stop foisting inferior mobile apps on people just because we can.


The Facebook app doesn't use React Native [1]. It used to be HTML5, but they rewrote it as mostly native code [2].

[1] https://facebook.github.io/react-native/showcase.html [2] https://www.facebook.com/notes/facebook-engineering/under-th...


There are definitely parts of the Facebook app built in React Native. I built some of them. :)


Interesting - is there an intention to rewrite the whole thing gradually and move away from native? Any reason it's not listed on the React Native site?


I left Facebook almost a year ago, so I can't speak to today.

But back then internal adoption of React Native was definitely accelerating rapidly and it was solving very real organizational and developer experience problems. The original motivation of the project internally was to solve developer experience pains, just like React but for mobile.

- With such a large app (Facebook), the compile cycle was becoming quite slow. RN has no compile cycle.

- You've got 3 teams (web, iOS, Android) per product (e.g. Events, Groups, etc.) and they don't really communicate or share any product code, despite building effectively the same thing.

Take the Mobile Ads Manager app: now one team (of web engineers) can ship an app on iOS and Android, sharing 83% of the code between the two apps, in half the time the project had budgeted for the pure-native iOS app alone. Not to mention the team loved their jobs, because they didn't have to wait 5 minutes for the damn thing to compile every time they made a change.


The Facebook app for iOS is actually mostly Objective-C [1] / Objective-C++. [2]

[1] http://quellish.tumblr.com/post/126712999812/how-on-earth-th... [2] http://componentkit.org/


JavaScript is faster in most cases than Objective-C.

(I'm not going to dispute that cross-platform toolkits can have less native fidelity than coding to the native toolkit does. I just don't like the Objective-C vs. JavaScript performance myth.)


Huh? What's the Objective-C vs. JavaScript performance myth?

Here's a link showing Objective-C beating JavaScript handily: https://medium.com/@harrycheung/mobile-app-performance-redux...

But is this even a debate? Wouldn't you expect a compiled, manually memory-managed language to be faster than an interpreted language with garbage collection?


That's an interesting benchmark, and I'd need to dive into the details to see what is going on. Perhaps there is some sort of JIT slow path. I would not expect method-heavy Objective-C to beat JavaScript. In general:

> But is this even a debate? Wouldn't you expect a compiled, manually memory-managed language to be faster than an interpreted language with garbage collection?

Objective-C is not compiled in terms of method dispatch, nor is it manually memory managed. Instead, all method dispatch happens through what is essentially an interned-string lookup at runtime, backed by a cache. Objective-C also has a slow garbage collector: atomic reference counting for all objects. (Hans Boehm has some well-known numbers showing how slow this is compared to any tracing GC, much less a good generational tracing GC like all non-Safari browsers have.)

The method lookup issue has massive consequences for optimization. Because JavaScript has a JIT, polymorphic inline caching is feasible, whereas in Objective-C it is not. It's been well known in Smalltalk research since the '80s that inline caching is essentially the only way to make dynamic method lookup acceptably fast. Moreover, JavaScript has the advantage of speculative optimization: when a particular method target has been observed, the JIT can perform speculative inlining and recompile the function. Inlining is key to all sorts of optimizations, because it converts intraprocedural optimizations to interprocedural optimizations. It can easily make a 2x-10x difference or more in performance. This route is completely closed off to Objective-C (unless the programmer manually does IMP caching or whatnot), because the compiler cannot see through method lookups.
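
A toy sketch of that dispatch difference, making no claims about Apple's actual runtime internals (which use selector hash tables plus per-class caches):

    #include <string.h>
    #include <stddef.h>

    /* Every Objective-C message send resolves its target at runtime,
       conceptually like this lookup; a C++ virtual call is a single
       indirect jump through a vtable slot instead. */
    typedef void (*IMP)(void *self);

    typedef struct {
        const char *selector;
        IMP imp;
    } MethodEntry;

    IMP lookup_method(const MethodEntry *table, size_t n, const char *sel) {
        for (size_t i = 0; i < n; i++)  /* real runtimes hash + cache this */
            if (strcmp(table[i].selector, sel) == 0)
                return table[i].imp;
        return NULL;
    }

Because a static compiler can't see the eventual call target through a lookup like this, it can't inline the callee; a JIT that has watched the call site can guess the target, inline it, and deoptimize if the guess goes stale.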

Apple engineers know this, which is why Swift backed off to a more C++-like model for vtable dispatch and has aggressive devirtualization optimizations built on top of this model implemented in swiftc. This effort effectively makes iOS's native language catch up to what JavaScript JITs can already do through speculation.


Thanks for the detailed response. To summarize, it sounds like your position is that:

1. Objective-C's compile-time memory management (ARC) is actually slower than JavaScript's garbage collection.

2. The performance consequences of Objective-C message sending outweigh the cost of JavaScript's JIT compilation. And furthermore, that JIT compilation is actually an advantage, due to the other optimization techniques it enables.

I'd like to see a more direct comparison with benchmarks, but I can see where you're coming from.


Right. Note that this advantage pretty much goes away with Swift. Swift is very smartly designed to fix the exact problems that Apple was hitting with Objective-C performance.

I realized another issue, too: I don't think it's possible to perform scalar replacement of aggregates on Objective-C objects at all, whereas JavaScript engines are now starting to be able to escape analyze and SROA JS values. SROA is another critical optimization because it converts memory into SSA values, where instcombine and other optimizations can work on them. Again, Swift fixes this with SIL-level SROA.
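
For what it's worth, here's a hand-written C illustration of what SROA does (the compiler performs this rewrite automatically when it can prove the aggregate doesn't escape):

    typedef struct { float x, y; } Point;

    /* Before SROA: the aggregate may live in memory. */
    float len2_before(void) {
        Point p = { 3.0f, 4.0f };
        return p.x * p.x + p.y * p.y;
    }

    /* After SROA: the fields become independent scalars (SSA values),
       which downstream passes like instcombine can then work on. */
    float len2_after(void) {
        float px = 3.0f, py = 4.0f;
        return px * px + py * py;
    }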


Actually, JavaScript is JIT-compiled by most engines these days. While it's not the fastest, it certainly can be quite fast.


I see, good point, but then I'd expect JIT compilation to still carry a runtime cost, as opposed to Objective-C being compiled before distribution to the client?


JIT compilation cost is minimal in practice due to tiered JITs.


Source please, because unless you have anything to back this up, your claim can only be considered grade-A FUD.

Sometimes I feel like using JavaScript too much transports people to some kind of imaginary JavaScript fairy land full of rainbows and unicorns. I read the craziest things about JavaScript development even though I can't comprehend why so many people believe it has any redeeming positive qualities over other languages, besides ubiquity.


See my reply to the sibling comment for the explanation. There haven't been enough cross-language benchmarks here to say definitively, but as a compiler developer I can tell you the method lookup issue is really fundamental and in fact is most of the reason for Swift's (eventual) performance advantage over Objective-C.


I've read your explanation, but I'm not convinced it supports your claims (which, barring any benchmarks that make them factual, I consider to be assumptions).

I'm aware of the dynamic dispatch overhead of Objective-C, but first of all, it's my understanding that Apple's Objective-C runtime & compiler perform all kinds of smart tricks to reduce the overhead to a minimum (caching selector lookups and such), and second, Objective-C does not require you to use dynamic dispatch if performance is a concern. No one is preventing you from using plain old C-style functions for performance-critical sections.

I also don't buy the 'ARC is slower than GC' argument. ARC reference counting on 64-bit iOS, as implemented using tagged pointers, has almost zero overhead for typical workloads. Only if you wrote some kind of computational kernel that operates on NSValue or whatever (which is a dumb idea in any scenario, about as dumb as writing such a thing in JavaScript) would you ever even see a difference compared to not having any memory management at all. Just like your other performance claims: without data, there is nothing that backs up your statement that ARC is slower than GC for typical workloads. Hans Boehm is not the most objective source for such benchmarks by the way.

Apart from that, you seem to spend an awful lot of effort explaining the things that would make Objective-C slower than JIT'ed JavaScript, while completely disregarding the overhead all this JIT'ing, dynamic typing, etc. has, and the fact that in JavaScript you basically have no way to optimize your code for cache friendliness or whatnot.

You may be a compiler developer, but based on your comments I'm highly doubtful you are aware of how much optimization already went into Apple's compilers, which greatly reduce the overhead of dynamic dispatch and ARC.


> second, Objective-C does not require you to use dynamic dispatch if performance is a concern. No one is preventing you from using plain old C-style functions for performance-critical sections.

That's just writing C, not Objective-C. But if we're going to go there, then JavaScript doesn't require dynamic dispatch either. You can even use a C compiler if you like (WebAssembly/asm.js).

> ARC reference counting on 64-bit iOS, as implemented using tagged pointers, has almost zero overhead for typical workloads.

No, it doesn't. I can guarantee it. The throughput cost is small in relative terms, I'm sure, but it would be smaller still if Objective-C used a good generational tracing GC.

(This is not to say Apple is wrong to use reference counting. Latency matters too.)

> Just like your other performance claims: without data, there is nothing that backs up your statement that ARC is slower than GC for typical workloads. Hans Boehm is not the most objective source for such benchmarks by the way.

There's not much I can say if you're determined to discount real numbers solely because Hans Boehm provided them. But here you go ("ARC" is "thread safe"): http://www.hboehm.info/gc/nonmoving/html/slide_11.html

Anyway, just to name one of the most famous of dozens of academic papers, here's "Down for the Count" backing up these claims: https://users.cecs.anu.edu.au/~steveb/downloads/pdf/rc-ismm-... Figure 9: "Our optimized reference counting very closely matches mark-sweep, while standard reference counting performs 30% worse." (Apple's reference counting does none of the optimizations in "Down for the Count".)

Here's the Ulterior Reference Counting paper, showing in table 3 that reference counting loses to mark and sweep in total time: http://www.cs.utexas.edu/users/mckinley/papers/urc-oopsla-20...

memorymanagement.org (an excellent resource for this stuff, by the way) says "Reference counting is often used because it can be implemented without any support from the language or compiler...However, it would normally be more efficient to use a tracing garbage collector instead." http://www.memorymanagement.org/glossary/r.html#term-referen...

This has been measured again and again and reference counting always loses in throughput.

> You may be a compiler developer, but based on your comments I'm highly doubtful you are aware of how much optimization already went into Apple's compilers, which greatly reduce the overhead of dynamic dispatch and ARC.

I've worked with Apple's compiler technology (clang and LLVM) for years. The method caching in Objective-C is implemented with a hash table lookup. It's something like 10 instructions [1], compared to 1 or 2 for a C++ vtable call, which is a 3x-4x difference right there. But the real problem isn't the method caching: it's the lack of devirtualization and inlining, which would allow all sorts of other optimizations to kick in.

Apple did things differently in Swift for a reason, you know.

[1]: http://sealiesoftware.com/msg/x86-mavericks.html


That's again a lot of information showing why Objective-C is not the most efficient language possible for all use cases, but it still does not provide any evidence why JavaScript would be faster. I'm not disputing the individual points you made, but in the context of comparing overall performance of Objective-C vs. JavaScript it doesn't say much at all. It mostly shows Objective-C will always be slower than straight C/C++, nothing about JavaScript performance. I appreciate the thorough reply though.

One thing I do want to make a last comment about is your dismissal of using C/C++ inside Objective-C programs as some kind of bait-and-switch argument. Using C/C++ for performance-critical sections is IMO not the same as calling out to native code from something like JavaScript, or writing asm.js or whatever other crutch you could use to escape the performance limitations of a language. As a superset of C, mixing C/C++ with Objective-C is so ingrained in the language that you have to consider it a language feature, not a 'breakout feature'. Nobody who cares about performance writes tight loops using NSValue or NSArray, or dispatches millions of messages from code on the critical path of the application's performance (which usually covers less than 10% of your codebase).

As an example, I'm currently writing particle systems in Objective-C, but it wouldn't even cross my mind to use anything but straight C arrays and for loops that operate directly on the data to store and manipulate particles. This is nothing like 'escaping from Objective-C', as all of this is still architecturally embedded transparently inside the rest of the Objective-C code, just using different data types (float * instead of NSArray) and calling conventions (direct data access instead of encapsulation). It's more like using hand-tuned data structures vs. the STL in C++ than like calling native code or writing asm.js from JavaScript.
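
A minimal sketch of what that looks like in practice (names and numbers are illustrative):

    #define N_PARTICLES 10000

    typedef struct { float x, y, vx, vy; } Particle;

    /* Straight C array: contiguous, cache-friendly, no NSArray boxing. */
    static Particle particles[N_PARTICLES];

    /* Hot loop: direct data access, zero message sends per particle. */
    void step_particles(float dt) {
        for (int i = 0; i < N_PARTICLES; i++) {
            particles[i].x += particles[i].vx * dt;
            particles[i].y += particles[i].vy * dt;
        }
    }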


No, it's not.

