
The first question on the first problem sheet reads:

> Describe one aspect of memory that you are absolutely certain is true. Explain how you know – be as specific as possible.

Any thoughts?


Memory is subconscious.

You may recall things without consciously thinking about them, and you may fail to recall things when specifically thinking about them. Conscious thinking is good at logically synthesizing information as it's needed, but some property of it forbids it from actually storing any information. Real-world data just doesn't organize neatly; the subconscious is fine being a mess.

Try this experiment: Ask your subconscious to recall a specific memory you know you have. Then, stop thinking about it entirely. Within an unspecified length of time (could be minutes, hours, days), you'll spontaneously recall details about that memory you didn't know you had.


Memory is fallible. I know because I remember my memory failing me. If that meta-memory is accurate, then it is evidence that my memory is fallible. If it is inaccurate, then I remember something that didn't happen, so my memory is fallible anyway.
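The dilemma in that comment is just propositional case analysis; as a sketch (using hypothetical proposition names A for "the meta-memory is accurate" and F for "my memory is fallible"), it can be checked mechanically:

```lean
-- If an accurate meta-memory implies fallibility, and an inaccurate one
-- also implies fallibility, then fallibility follows by excluded middle.
example (A F : Prop) (acc : A → F) (inacc : ¬A → F) : F :=
  (Classical.em A).elim acc inacc
```

Either way the case split goes, the conclusion F is reached, which is exactly the structure of the argument above.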


Are you sure you remember when your memory is fallible? ;-) (Mine is perfect; I don't remember ever forgetting anything.)


Memory is associative. I know this because someone told me a long time ago, and then I've seen many studies which have shown it. I am sure some confirmation bias was at play. I specifically recall two types of studies. The first is that giving a hint of any specific part of a memory aids in recalling the whole memory. People have used this to construct memorization techniques, such as the memory palace. The second is that a memory created during an emotional state is easier to recall when that emotional state is recreated.

Additionally, I have not seen any studies which disprove the memory-associativity claim, although I am sure there are some and I just have not looked hard enough.


For those who are getting confused about the S and K combinators, another definition is implied by the notion of a 'partial combinatory algebra' [1]. It is a set A together with a partial map A x A -> A, viewed as 'application'. You then require the S and K combinators to exist somewhere in A (they are non-unique) and you can prove things like the recursion theorem.

[1] https://ncatlab.org/nlab/show/partial+combinatory+algebra
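For readers who prefer code to algebra, the S and K combinators themselves are easy to write down. Here is a minimal curried-Python sketch (the rendering is my own; the comment above is about the abstract algebraic setting, where application is a partial binary operation on a set A):

```python
# The classic combinator definitions, curried:
#   K x y   = x
#   S f g x = f x (g x)
K = lambda x: lambda y: x
S = lambda f: lambda g: lambda x: f(x)(g(x))

# The identity combinator is derivable: I = S K K, since
# S K K x = K x (K x) = x.
I = S(K)(K)

print(I(42))        # 42
print(K("a")("b"))  # "a"
```

S and K alone are combinatorially complete: any closed lambda term can be rewritten using just these two, which is why a partial combinatory algebra only needs to contain elements behaving like them.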


It's more helpful to view category theory as a language.

Most undergraduates learn mathematics in the language of sets. If they're doing algebra, then the underlying structures of their groups, rings, etc. are always sets. Same for topology and logic (e.g. classical model theory). An extreme example of this set-theoretic thinking is defining the real numbers in terms of Dedekind cuts.

Category theory, very slowly, changes this perspective. If you study this for years, you will come to realise there is more to mathematics than sets. And your way of thinking will shift. You will learn to ask different questions. For example, instead of defining things in terms of their underlying set (often breaking symmetries), you will ask 'does it have a universal property?' and so on. Note, there are still tons of reasons to use sets / ZFC, even when working with categories mostly. But I do not want to get into that and it's irrelevant for the point I'm trying to make.

Final note: the 'language of sets', i.e. the language of undergraduate mathematics, is very different from set theory. 'Set theory' is like the language talking about itself. The same goes for pure category theory.


Set theory was developed to make analysis “rigorous” by the standards of 19th century mathematicians, who were probing the edge cases of the more intuitive (or maybe handwavey) assumptions from the 17th–18th century. Here’s Rudin to explain: https://www.youtube.com/watch?v=hBcWRZMP6xs

In the 20th century it became trendy to (a) try to base everything on set theory, (b) write mathematics in a very dry and formal style, taking inspiration from the Bourbaki project.

My impression is that undergraduates are taught in the language of sets partly because it is serviceable for describing the subjects they are trying to learn (esp. analysis) but also more importantly because mid-20th century mathematicians who set up the curriculum we are still using wanted to thoroughly teach (indoctrinate?) undergraduates the trendy style of the time.

Personally I think it has been a mixed bag; the style is a big turn-off to many people, and ends up chasing people out of mathematics who could otherwise make valuable contributions. The people who remain seem to mostly like it okay though. YMMV.


I have some problems with this comment.

First, why the scare quotes around "rigorous"? Set theory literally did make the foundations of analysis more rigorous.

Second of all, we don't use set theory because it's "trendy" or because we want to "indoctrinate" undergraduates, we use it because it's the best known foundation for mathematics. If there was something less complicated or annoying that worked, we'd use that instead. But as far as we know, there isn't, and there are good reasons to believe we won't find something better.


Your observations strike a serious chord with me, because the first class I took where a professor wielded set theory like a ruler to the back of students' hands was exactly the point I realized that mathematics was not my calling.

But I've come to understand that I was not realizing that I didn't like mathematics or thought it was too hard, but instead that if I made that my life's work, I'd be working with many more people like that, rather than the people I wanted to - bright and creative, yet kind and humble.

I don't think set theory or any other set of tools is the problem behind potential academics getting turned off. It's the people, and the culture of glorified monkhood.


Note that "sets" and ZFC are not the same thing - ZFC is simply one set theory among many. In fact, structural set theories like ETCS ("Elementary Theory of the Category of Sets") or SEAR ("Sets, Elements And Relations") are even more cleanly suited to typical undergrad mathematics than ZF(C), while also being easier to characterize categorically.



Very good point. To add to this, even category theory is often taught in the language of sets and classes (“A category is a class of objects with a set Hom(X,Y) of morphisms for each pair of objects, etc.”).

It is possible to use categories as basic building blocks instead of sets but, in my anecdotal experience, this is not what the majority of graduate programs in Mathematics do.

It will be interesting to see whether this will change in the next 20 years.


They don't do it for good reason: set theory is basically a strictly better foundation for mathematics, and you have to ape all the set theoretic constructions when doing various things anyway (e.g. constructing the real numbers), so it doesn't buy you anything.

This issue has been litigated extensively, and in my view successfully, by Harvey Friedman on the Foundations of Mathematics mailing list, if you want to check its archives.


OK, but this doesn't answer the question. You intimate there are benefits but never say what they are.

The vast majority of mathematicians go their entire lives without using the word "category" in a paper. What are they missing out on?


Much of theoretical computer science uses categorical methods, for example, as semantics for type theory. In that field, such techniques are often more natural than set-theoretic ones due to issues of computability, decidability, etc. So at the very least, much of that research as originally written would be inaccessible to a non category theorist. Whether that field counts as mathematics and if it does whether it is worth missing out on depends on taste of course, but an example would be 'homotopy type theory' https://homotopytypetheory.org


HoTT is a strictly worse foundation for mathematics than ZFC, and in the end has to end up copying a bunch of the usual constructions anyway (like defining the real numbers). So this is not convincing.

One amusing problem is discussed here: https://mathoverflow.net/questions/289711/defining-sun-in-ho....

But suppose HoTT were equally good. What compelling reason is there for a working mathematician to learn it? We already know about set theory, and it meets all of our needs.


Honestly, I'm not sure; I'm a working computer scientist, not a working mathematician, and I use a proof assistant (Coq) all the time to have confidence that the proofs I write are correct (and more and more this is a requirement for publication in CS conferences). I want HoTT to succeed because it would turn some of the axioms I must assume in the current Coq proof assistant into theorems, with significant implications for 'proof engineering' at scale (e.g., verifying an OS kernel, compiler, or database). When I read the MathOverflow post, my emotional response is gladness that I can't accidentally conflate a topological space with an infinity groupoid. Many of the HoTT researchers are mathematicians (including the late Vladimir Voevodsky); presumably, the HoTT book (which I understand is available online) can give you a better answer than I can from the perspective of a working mathematician.


I can't comment on the CS stuff with any degree of expertise; the question was from the perspective of a mathematician wanting to know about mathematical applications. And not to pick on you, but again and again I ask category theory people to give me concrete examples, and again and again I get responses like yours, which amount to fairly vague gestures at theoretical purity. Contrast this with something like Galois theory. If an undergraduate asks me why Galois theory is important, I can point to about a half-dozen important problems (in terms of their place in the theory and in history) that are inaccessible without the concepts of Galois theory. For those problems, Galois theory is not just another language or a cool way of looking at things; they are (as far as I know) intractable without it. I have never seen a category-theoretic example as compelling outside of algebraic topology and adjacent fields.

I also think it's important to point out for anyone reading that one certainly doesn't need category theory to do proof verification. Some other options, and some gripes with the Coq community along the lines of the previous paragraph, are listed here: https://xenaproject.wordpress.com/2020/02/09/where-is-the-fa....

Also, if your foundational theory doesn't allow me to (easily) define a fundamental geometric object like SU(n), then it's just a non-starter. Again, I have not been able to find a reason why we ought to endure such pain to do simple things when we already have a perfectly good foundational theory. Re: the HoTT book, it is precisely because I've looked at the book that I have these questions.


Perhaps have a look at the paper "Cubical Type Theory: A Constructive Interpretation of the Univalence Axiom" by Cohen et al. [1] which gives the typing rules in the early sections.

[1] http://drops.dagstuhl.de/opus/volltexte/2018/8475/


Your example using the verb "see" is not quite right, but a genuine ACI does exist in English: I want him to swim / I expect him to swim.

For another example, in Dutch it happens to work with "see" (zien): "ik zie hem zwemmen" (I see him [to] swim), it doesn't work with want, and with expect, "verwachten", you would have to add "to": "Ik verwacht hem te zwemmen" (I expect him to swim).


Yes, thanks for pointing that out, I was indeed taking the example from German (where "sehen" works with a genuine ACI like Dutch) and put it in English without much thought, particularly without realising that other examples do work in English.


"Unlike the rest of the language, numerals are written left-to-right."

Everyone makes this mistake. It is rather the case that in English (and other languages) numerals are written right-to-left. You can tell because, when reading right-to-left, you know exactly what each digit signifies. If you start from the left, you will not know what the first digit signifies until you have reached the end of the whole numeral.

Interesting to learn though that in Arabic it is still pronounced from left to right, up until the tens.
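The right-to-left point can be made concrete: reading digits from the right, each digit's place value is known the moment you see it, with no lookahead. A small sketch (the helper `place_values_rtl` is my own illustrative name, not anything standard):

```python
def place_values_rtl(number_string):
    """Read a decimal numeral right-to-left, pairing each digit with
    its place value. The k-th digit from the right is worth d * 10**k,
    which is known immediately; reading left-to-right, the first
    digit's value is unknown until the total digit count is seen."""
    return [(d, int(d) * 10 ** k)
            for k, d in enumerate(reversed(number_string))]

print(place_values_rtl("1959"))
# [('9', 9), ('5', 50), ('9', 900), ('1', 1000)]
```

This is exactly why long addition and multiplication by hand proceed from the right: the carries flow in the direction the place values are resolved.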


It actually used to be right to left, just like the language! In some formal communication it's still the case, like when news channels announce the new year: "one and eighty and nine hundreds and a thousand". The change to reading from left to right started fairly recently, in the twentieth century, along with the change of the order of the alphabet from أبجدهوز to أبتثجحخ.

I'm a native Arabic speaker, and yes, I still struggle to both speak P and hear P. Put no BroPlem!


I'm curious, when was it right to left?

In computer speak, the way we write numbers in English is 'big endian'. We write the most significant digit first.

The most common 'little endian' system is postal addresses, where we start with the smallest unit (name), then, in some cases, house number, street, city, country.

Note that Roman numerals are commonly written in a big-endian way. So this practice is very old.


Yet historically English numbers were little endian, base twenty: "four and twenty" etc. Base 20 comes from Celtic roots I think, so perhaps other European languages have a similar history too.


Actually, for example, 1959 in Arabic: the modern way to say it is "A thousand and nine hundred and fifty nine - الف وتسعمائة وتسعة وخمسون", but we can also say (and this is the old way) "Nine and fifty and nine hundred and a thousand - خمس ﻮ تسعون ﻮ تسع مئة وألف", which reads from right to left.


"There is no team of brilliant and vaguely sinister engineers, cooking up ways to get you binge reading."

There is, and it's called an editor. This is especially why a lot of bestselling non-fiction reads so smoothly. I always have a bunch of these lying around for when I want something easy to get into.


One issue I have with Anki is that for every new format of cards I try, it takes about a year to know whether it works better or not.

For instance, I have taken a different approach to vocabulary since I found that single words (e.g. in French) were sometimes hard to recall after a very long time. I now almost always use a cloze sentence, and ideally the sentence I encountered the word in, so that I combine active recall with remembering the context, which usually settles the issue of multiple possibilities. However there is no way of knowing how this will work out in the long run. Does anyone have telltale signs that indicate if a card, although you may do well for months, will not do well past the, say, 1yr threshold?


You need to rehearse content or it will fade. However, if you still know the answer 2 days later, then it is in your long-term memory (I saw a graph yesterday, probably from HN, where the recall percentage plummeted throughout the day but then stayed relatively stable). Anki has support for precisely this feature. When you start the card, you get to see dots in a colour for a short while; this tells you how you perform on the card.
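The shape described in that graph (a steep early drop, then a relatively flat tail) is the classic forgetting curve. A toy sketch, assuming the common exponential form R = exp(-t/s), where s is a stability parameter that spaced repetition tries to grow; the exact functional form and numbers here are illustrative, not Anki's actual scheduler:

```python
import math

def retention(t_days, stability):
    """Estimated recall probability after t_days, for a given memory
    stability (higher stability = slower forgetting)."""
    return math.exp(-t_days / stability)

# Steep drop over the first day or two, then a long shallow tail:
print([round(retention(t, 2.0), 2) for t in (0, 1, 2, 7, 30)])
# [1.0, 0.61, 0.37, 0.03, 0.0]
```

Each successful, well-timed review effectively increases the stability parameter, which is why intervals between reviews can grow so aggressively.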

That being said, your question is specific to language (French in your example, but could just as well be code). Due to the complexity and possibilities you may not be able to recognize/cope with different combinations or situations.


I’ve wondered about a similar approach myself, since singular words are often hard to recall when needed, and don’t convey the subtle difference in usage between synonyms.

I also find repeatedly alternating between the foreign language word and native meanings to be jarring.

How do you choose your sentences? I mean, you mention using the sentence in which you encountered the word. But what is your source, eg. newspapers or adult-level novels?


My source is any novel/non-fiction or newspaper article I would be reading when encountering the word. I usually copy the sentence, maybe trim it a bit, and cloze on the word. The hint I give depends on the word, sometimes an English translation, sometimes a definition in the language itself, or sometimes a cognate word in a different language I know.

An additional benefit is that I remember the content of books more easily, since I am passively reminded of passages through Anki. This means I can put aside a book for months and get back into it without problems.

A problem, though, is that because I only use active recall of the word, I sometimes can't remember the meaning of the word when I encounter it, especially when the context is different. This can be quite subtle. E.g. I might put in "aborder" (to approach) in the context of "how would you _approach_ this question", but then when I read somewhere "the man was approaching" I would recognize the word, but be unable to make sense of it.

I have been trying to remedy this by sometimes choosing a more typical example sentence (from a dictionary or something) rather than the encountered one, which could be too poetic. But as I remarked, with all these changes it is hard to measure the effect in the long term.


For my language cards, I don’t have any English on them at all. They’re all Cloze deletions from either books or a dictionary, or sometimes declension tables.

I wrote more detail a few weeks ago: https://news.ycombinator.com/item?id=19666638


I tried using Anki for learning Ancient Greek words. Didn't work for me. I need the context and then deciphering the meaning becomes a challenge, which will be rewarding. Going through piles of word-for-word cards is dull.

I guess the best way is to read as much as one can, which is obvious in hindsight. Easier said than done, though.


For context, I use a basic card and add the context with the word itself. Eg.

front- define: comonotonic (probability theory, comonotonicity)

back- perfect positive dependence between the components of a random vector


So this is the kind of news that one ought to be careful with now that everyone has had a chance to read the Black Swan.

It has probably more to do with genetic drift. You cannot stumble upon a tribe, take some random property they all share, and then make causal inferences from all the other properties they all share. This holds especially when this property correlates with survival of the tribe; it is actually more informative to investigate a group of people with properties that reduce survival, e.g. bad heart conditions.


Could it be the law of small numbers?

If we broke the world population out into groups the size of this population - one of them would have to be the longest lived. Then we could send reporters there to create a narrative around how they lived.

I like the point that they don’t consume processed foods. As a Celiac, the comments about parasites are interesting (there are some therapeutic approaches here under evaluation).

> “But up until the day they die, the Tsimane are often very healthy.”

Is there a way to measure this?
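The law-of-small-numbers point above is easy to demonstrate by simulation: partition a large population into small groups and the luckiest group's average will look remarkable purely by chance. All numbers below are made up for illustration:

```python
import random

random.seed(0)

# Simulate 100,000 lifespans from one homogeneous population
# (same mean 75, same spread for everyone -- no special diet, no secret).
lifespans = [random.gauss(75, 10) for _ in range(100_000)]

# Split into 200 "villages" of 500 people each.
groups = [lifespans[i:i + 500] for i in range(0, len(lifespans), 500)]

overall = sum(lifespans) / len(lifespans)
best = max(sum(g) / len(g) for g in groups)

print(f"overall mean: {overall:.1f}, luckiest village mean: {best:.1f}")
```

Some village always comes out on top even though nothing causal distinguishes it, which is exactly the selection effect to rule out before sending reporters to build a narrative.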


What is it with this tendency to speak of a 'Nobel prize' when it concerns a different, lesser-known prestigious prize in a field for which there happens to be no Nobel prize (Abel prize, Turing award)? It's a lazy way to draw attention to it, and pernicious even, because it makes the whole purpose of these named awards questionable.


Perhaps you answered your own question: "It's a lazy way to try to draw attention to it". It's not crazy, though, to me at least. The general population has some understanding of the Nobel but doesn't with others (e.g., Fields Medal). Why not give a comparison point?

The only one that sticks out as pernicious to me is the Economics Nobel, which was explicitly created to sound like a Nobel prize, but really isn't one.


True that, @Upvoter33. Nice response.

