• 1 Post
  • 8 Comments
Joined 2 years ago
Cake day: June 13th, 2023

  • “The monkey about whose ability to see my ears I’m wondering”.

    Part of the issue is that the thing you’re wondering about needs to be a noun, but the verb “can” doesn’t have an infinitive or gerund form (that is, there’s no purely grammatical way to convert it to a noun, like *“to can” or *“canning”). We generally substitute some form of “to be able to”, but it’s not something our brain does automatically.

    Also, there’s an implied pragmatic context that some of the other comments seem to be overlooking:

    • The speaker is apparently replying to a question asking them to indicate one monkey out of several possibilities

    • The other party is already aware of the speaker’s doubts about a particular monkey’s ear-seeing ability

    • The reason this doubt is being mentioned now is to identify the monkey, not to declare the doubt.


    • I don’t think it’s useful for a lot of what it’s being promoted for—its pushers are exploiting the common conception of software as a process whose behavior is rigidly constrained and can be trusted to operate within those constraints, but this isn’t generally true for machine learning.

    • I think it sheds some new light on human brain functioning, but only reproduces a specific aspect of the brain—namely, the salience network (i.e., the part of our brain that builds a predictive model of our environment and alerts us when the unexpected happens). This can be useful for picking up on subtle correlations our conscious brains would miss—but those who think it can be incrementally enhanced into reproducing the entire brain (or even the part of the brain we would properly call consciousness) are mistaken.

    • Building on the above, I think generative models imitate the part of our subconscious that tries to “fill in the blanks” when we see or hear something ambiguous, not the part that deliberately creates meaningful things from scratch. So I don’t think they’re a real threat to the creative professions. I think they should be prevented from generating works that would be considered infringing if they were produced by humans, but not from training on copyrighted works that a human would be permitted to see or hear and be affected by.

    • I think the parties claiming that AI needs to be prevented from falling into “the wrong hands” are themselves the most likely parties to abuse it. I think it’s safest when it’s open, accessible, and unconcentrated.

  • For over a decade, complexity scientist Peter Turchin and his collaborators have worked to compile an unparalleled database of human history – the Seshat Global History Databank. Recently, Turchin and computer scientist Maria del Rio-Chanona turned their attention to artificial intelligence (AI) chatbots, questioning whether these advanced models could aid historians and archaeologists in interpreting the past.

    Peter Turchin and his collaborators don’t have a great record of understanding human history themselves—their basic shtick has been to try to validate an Enlightenment-era, linear view of human history with statistics from their less-than-rigorous database, with less-than-impressive results. I wouldn’t necessarily expect an AI to outperform them, but I wouldn’t trust their evaluation of it, either.


  • Cheung explains, “Our work asks: what is the precise math problem whose solution is the scattering amplitude of strings? And is it the unique solution?” He adds, “This work can’t verify the validity of string theory, which like all questions about nature is a question for experiment to resolve. But it can help illuminate whether the hypothesis that the world is described by vibrating strings is actually logically equivalent to a smaller, perhaps more conservative set of bottom-up assumptions that define this math problem.”

    OK but the only thing string theory ever had going for it is that the math is exceptionally pretty, so is this actually adding anything new?