dhairya's comments

It's in reference to the Physics prize going to Hinton and Hopfield "for foundational discoveries and inventions that enable machine learning with artificial neural networks," and the Chemistry prize going to Google DeepMind's founder Demis Hassabis, alongside John Jumper (Google DeepMind) and David Baker, for AlphaFold. Both prizes went to significant figures in the AI space or for applications of AI.


Cambridge and Boston are fantastic for spring and fall migration birding, especially Mount Auburn Cemetery in Cambridge. There's a resident barred owl and a red-tailed hawk that swoops in quite close as you walk about. https://ebird.org/hotspot/L207391

If you are taking a day hike in the Blue Hills, take the main road before the hill summit to the right for about a mile to the Trailside Museum. I discovered it by accident, and it's super cool: it has a public wildlife sanctuary attached, with foxes, otters, eagles, snowy owls, and other cool animals.


I like to joke that I have a type B personality with type A ambitions, which is a recipe for perpetual unhappiness. When the switch is on, things move quickly, but it's been getting harder and harder to find the switch. I'm in my early (soon to be mid) 30s, and I've been trying to figure out whether having grand ambitions is still necessary to overcome inertia, or whether it's something else.


The challenge for the engineers at our AI startup is that deterministic testing paradigms don't adapt well to probabilistic models that are continually being retrained. As a scientist, it's hard to convey the acceptable range of variance, and that individual predictions near the decision boundary can flip essentially at random. It's also hard to debug which behavioral issues are actually systematic model failures versus traditional infrastructure bugs. Oftentimes the band-aid is to build lookup tables to ensure certain behaviors, which in turn also prevents underlying issues from being discovered.

Testing paradigms are either too high-level or too specific. Recent work on evolving behavioral tests addresses this, but it requires more manual effort and interpretation, which kinda defeats the point of automated tests.
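To make that concrete, here's a rough sketch of what a variance-aware behavioral test might look like (the model API, expected rate, and tolerance are all made up for illustration):

    # Instead of asserting exact predictions, assert that aggregate
    # behavior stays within an agreed variance budget across retrains.
    import numpy as np

    def test_positive_rate_within_band(model, inputs, expected=0.72, tol=0.05):
        probs = model.predict_proba(inputs)  # assumes an sklearn-style API
        positive_rate = np.mean(probs[:, 1] > 0.5)
        # Tolerate retraining-induced flips at the decision boundary,
        # but fail on shifts larger than the acceptable range of variance.
        assert abs(positive_rate - expected) <= tol

The point is that the assertion encodes a distributional expectation rather than pinning individual predictions, which is exactly what breaks after every retrain.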


Good discussion. I have often wondered about that interface.

This actually bears more resemblance to traditional manufacturing. I think there may be some value in borrowing ideas from statistical process control, rather than trying to force predictions into deterministic test cases.
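As a rough sketch of what I mean (assuming you log some per-batch quality metric; the names here are hypothetical):

    # Shewhart-style control chart over a batch-level model metric.
    import numpy as np

    def control_limits(baseline_scores, k=3.0):
        # Classic k-sigma limits derived from a stable baseline period.
        mu, sigma = np.mean(baseline_scores), np.std(baseline_scores)
        return mu - k * sigma, mu + k * sigma

    def out_of_control(batch_score, limits):
        lo, hi = limits
        # Investigate only when the process drifts outside its normal
        # variation, rather than alerting on any single changed prediction.
        return not (lo <= batch_score <= hi)

That treats the model like a manufacturing process: you accept common-cause variation and only chase special-cause signals.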


I disagree. We are conflating being really smart with being self-motivated. Being in the right environment is also very important. I do believe that anyone can do almost anything (though I'm not going to be an NBA player in my 40s). Certain things become harder as time goes on given education or industry requirements, but they are still not impossible: there are people who become medical doctors in their 50s. It's fair to say that the privilege of time and money also makes career transitions far easier for some folks. But if you are willing to put in the time and effort and stick with it, you can learn anything and make the jump career-wise.

If you want to break into a technical field from a non-technical background, the better indicators of success are grit, perseverance, and self-motivation. Learning becomes easier if you are motivated to learn and stick with it even when it's hard. I used to mentor at a nonprofit web-dev bootcamp that aimed to help students from underestimated and non-traditional backgrounds (no college education) become software developers. Most of the students did not have traditional STEM backgrounds and were learning to program for the first time. The program was free and deliberately designed to be hard, with multiple points where students would be kicked out if they didn't keep up with the work. There were no traditional tests or coding exams. All assignments were project-based with clear deliverables (website, backend database, full-stack JavaScript application, etc.).

Most of the students who finished the program (over an 80% graduation rate and a 99% employment rate) got well-paying dev jobs (average salary of $90k). Of the students I mentored, the most successful were the ones willing to put in the extra hours to learn and ask for help (often putting in 80-100 hour weeks of learning) and who were genuinely curious to learn beyond the scope of the curriculum. At the end of the day, the program was not filtering on general "intelligence" (whatever that means) but on the perseverance of students to put in the work and produce something each week.


A couple of things come to mind with this article. My own journey has been quite nonlinear, both in terms of roles (business systems analyst -> data analyst -> technical project manager -> data scientist -> AI research scientist) and environments (F100 -> academia -> startups). My undergrad (creative writing and social sciences) would not have predicted my current role (senior AI researcher focusing on deep learning and NLP), and I still have no idea where I want to end up.

It can be hard to imagine and project your potential. Our journeys are often not linear, and we have a hard time factoring in who we will be in the future as the sum of our experiences. Often that growth in knowledge and life experience is exponential, even though in the present it feels linear.

I also find it useful to think about problems instead of roles. I've had roles that didn't exist 10 years ago, and likewise, new problem spaces are always emerging. Problems don't have to be domain-specific or role-specific; they generally describe the types of challenges you find interesting. Once I identify a problem space, I start to think about how I would like to make an impact and how I can currently make an impact. Sometimes the two are the same; other times they are different and require a journey to get there.

I find the metaphor of problems useful because it helps align the type of work I do with the things I find interesting at any given point. It also helps narrow the search space for opportunities and clarify what type of career growth is meaningful for you.


It sounds like you have had an amazing adventure so far, and it's really inspiring to see that you've been able to have such a fluid career. Could I contact you to learn more about your adventures? My email is Anthony at yesrobo dot net


Happy to chat. My contact info is on my profile.


This article is predicated on the assumption that going to college immediately after graduating high school is important. The pandemic greatly skews the decision calculus for many students, for reasons ranging from safety to short-term employment being enough for their current situation.

If college is an important factor in improving economic outcomes, it shouldn't matter whether you go at 18, take a few years and go at 21, or go even later in life. We have a stigma around adults who get a college degree later in life. I've met several people who went to college as older adults (one at the age of 26, the other at 30) and ended up having highly lucrative careers. My mom got her master's at the age of 55 (and rightfully lorded it over my sister and me that if she could get her degree with straight As while holding down a job, being a mom, and being in her 50s, then we have no excuses).

I believe college is valuable (though greatly overpriced in the US), but you don't need to be a young adult to attend. As for the labor-market effect of having fewer college graduates available, honestly most jobs don't really require a college degree (including office and white-collar jobs). Employers tend to use college degrees as a cheap filtering signal instead of building better hiring processes. Most entry-level jobs have onboarding and training, where college knowledge is not a prerequisite for success.


Thanks for posting the non-paywalled version.


Posh Tech | Multiple Full-Time Roles | Remote with hope of onsite when conditions are safe | Boston, MA

About Us: Posh is a Conversational AI company creating the most natural and enjoyable customer experiences for financial institutions through intelligent chatbots and conversational phone bots. After spinning out of MIT’s AI Lab, Posh has grown to a 25-person team with over 30 customers. We’re just about to enter high growth and eager to bring on new “Poshies” who are excited to scale with us!

Open Roles:

- Sr. Solutions Engineer

- Security & Compliance Lead

- Sr. Account Executive

- NLP Engineer / Scientist

- UX Designer

- Software Engineer

- Sr. Software Engineer

Application Details: https://angel.co/company/posh-technologies-inc/jobs

Feel free to reach out to me if you have questions (details in profile).


This article makes no sense to me. If the premise were that Google's AI ethics research will implode, then yes. Removing key members of the team has affected team morale and created a toxic environment for the remaining researchers. However, the post seems to imply that stifling the Stochastic Parrots paper is somehow proof that Google AI stifles innovative research that is critical to AI progress. This is quite a weak claim, even before looking at the logic of the arguments that get to that point.

Google AI, Google Brain, and DeepMind are all different groups at Google with different mandates and research goals. While what's happening in the Ethical AI team is troubling, it's a rather large and unfounded leap to say it'll affect research productivity for the other teams.

Digging deeper, the article is confusing and sometimes plainly wrong in its assessment of AI research. Broadly, the deep learning and RL approaches to AI have been critiqued for their lack of semantic and symbolic understanding. These critiques are not Google-specific, and the article's examples of them are terrible.

The first example, the limitations of AlphaZero on Montezuma's Revenge, is a bad one. The author implies that RL failed because it didn't understand ladders. But later approaches still solved the game using stochastic exploration strategies, not by introducing conceptual knowledge into the model, which the article implies is the key limitation.

On language modeling, it's weird that the article cites GPT-3 as problematic given that GPT-3 was developed at OpenAI, not Google. Also, GPT-3 is pretrained using next-word prediction, which only considers left context, and is far more limited than BERT, which considers bidirectional context and produces richer word-level embeddings. That said, the Stochastic Parrots paper does specifically critique BERT.
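You can see the bidirectional difference with a masked-word example (an illustration using the Hugging Face transformers library; the model choice and sentence are arbitrary):

    # BERT fills in a masked token using context on *both* sides,
    # whereas a GPT-style model only conditions on words to the left.
    from transformers import pipeline

    unmasker = pipeline("fill-mask", model="bert-base-uncased")
    print(unmasker("I went to the [MASK] to deposit my money."))
    # Words to the *right* of the mask ("deposit my money") push the
    # top predictions toward "bank" rather than, say, "store".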

But it's not a new critique. Emily Bender, the other major co-author, is a computational linguist who has always been critical of deep learning approaches to NLP. Bender, along with Gary Marcus and many others, has called for AI that incorporates symbolic and linguistic knowledge and has been critical of purely data-driven deep learning approaches. Stochastic Parrots is not new in its critique of large language models; it just provides newer evidence specific to the current state of language model research.

So I'm not sure how any of this is a signal that Google AI is imploding. The broader trend in AI is not just to throw more compute at bigger models; it just happens that large models work well for OpenAI and Google on specific problems. Google also has one of the largest knowledge graphs, and there is an open line of research that combines symbolic knowledge from knowledge graphs with deep learning methods. There is also active research, both at Google and elsewhere, that aims to make current deep learning approaches more "intelligent" by using linguistic and symbolic knowledge.

Again, I'm confused as to how Google AI research is imploding. Google PR attempting to censor Stochastic Parrots (which was still published) because of bad optics has nothing to do with the active research questions being pursued elsewhere at Google Brain, DeepMind, and Google AI.

