Changed around line 6: title Goblin Interviews Alexandra Elbakian
- thinColumns 4
+ container 500px
Breck Yunits
2 days ago
updated how-james-park-built-fitbit.scroll
how-james-park-built-fitbit.scroll
Changed around line 5: title How James Park built FitBit
+
Breck Yunits
2 days ago
updated how-james-park-built-fitbit.scroll
how-james-park-built-fitbit.scroll
Changed around line 5: title How James Park built FitBit
- thinColumns 4
+ container 500px
Breck Yunits
4 days ago
easiest job
bipolarModel.scroll
Changed around line 45: Mitochondrial populations change much more gradually than substance levels in th
- I believe eventually the phrase "bipolar disorder" will be retired, as it is a false label. I propose "hypermito" - for the state of having too much mitochondria; and "hypomito" - for the state of having too little.
+ I believe eventually the phrase "bipolar disorder" will be retired, as it is a false label. I propose "hypermito" - for the state of having too much mitochondria; and "hypomito" - for the state of having too little. (Actually, "mitohigh" and "mitolow" sound even better.)
theEasiestJob.scroll
Changed around line 1
+ date 2025-4-28 2
+ tags All IntellectualFreedom Society
+ title Information is the Easiest Job
+ container 500px
+ standardPost.scroll
+
+ People were surprised that AI has turned out to make information workers obsolete before laborers, but I'm not.
+
+ Information is the easiest job.
+
+ ***
+
+ I laugh at the obliviousness of the photographer who snaps a photo of the Golden Gate Bridge and demands copyrights and royalties, as if that task were harder than the years of labor by thousands building it (some of whom lost their lives).
+
+ Information is the easiest job.
+
+ Who worked harder, the men who spent years leveling a path through the forest and mountains to build a road, or the guy who makes an SVG representation of the road for a digital map?
+
+ Information is the easiest job.
+
+ I enjoy thinking about information. I like to write. I like to find new ideas and digest them and rotate them and tear them apart and put them back together.
+
+ But not for a moment do I think information jobs are harder than the physical labor jobs I did in the past, or that others are doing all around me.
+
+ That's why I release all of my work into the public domain. I wouldn't dare throw a "copyright" sign on my work, or a "license", and pretend that my job is so special that I deserve to restrict the freedoms of others.
+
+ The janitor does not demand royalties when I walk into a clean room; the plumber does not demand royalties when I flush the toilet; the electrician does not demand royalties when I turn on a switch; the furniture maker does not demand royalties when I sleep on a bed; the shoemaker does not demand royalties when I go for a walk.
+
+ Why on earth should I demand royalties when someone uses my outputs?
+
+ Especially since information is the easiest job.
+
+ ***
+
+ I do want to get paid to produce solid information. I sell things of various sorts. I find information work that needs to be done and deliver. I push myself to always be improving my skills so I can make the best information I can.
+
+ But I don't expect the absurd salaries of the old days.
+
+ ***
+
+ Why have information workers been paid so much, relative to other professions?
+
+ Corruption.
+
+ The people who make the laws (lawyers) are information workers, and so unsurprisingly they made unnatural laws to benefit themselves.
+
+ They made information jobs far harder than they should be. Instead of encouraging collaboration they encouraged siloed work and unnatural monopolies.
+
+ The people who inform the public (the media) are also information workers, and so unsurprisingly they misled the public into not opposing these laws.
+
+ But now AI has come along, and has ignored these unnatural laws (and just trained on everything, ignoring the information laws we humans are shackled with), and shown what a farce these high salaries for the easiest jobs have been.
+
+ ***
+
+ My advice to information workers is this: keep in mind that information is the easiest job.
+
+ If your job is information, do it to the best of your ability, like you would want anyone else to do their job.
+
+ Don't expect the monopoly salaries of old. It wasn't honest before and now the truth is harder to hide.
+
+ And please, do your best to publish your information in the most honest format: unencumbered by "licenses"; clean source code; auditable change history.
+
+ It's an easy job, but it's even easier if we all do it right, and work together.
+
+ ****
+
Breck Yunits
4 days ago
package.json
Changed around line 18
- "scroll-cli": "^178.1.0"
+ "scroll-cli": "^178.2.1"
Breck Yunits
4 days ago
Digi
digi.scroll
Changed around line 1
+ date 2025-4-28
+ tags All Thinking Society
+ title Deceptive Intelligence versus Genuine Intelligence
+ container 500px
+ standardPost.scroll
+
+ // aka Devil Intelligence versus God Intelligence
+
+ Deceptive Intelligence is when an agent emits false signals in its own interests, contrary to the interests of its readers. Genuine Intelligence is when an agent emits honest symbols to the best of its ability, grounded in natural experiment, without any bias against its users.
+
+ ***
+
+ Artificial intelligence is here and very powerful.
+
+ Will we have Deceptive Intelligences or Genuine Intelligences?
+
+ ***
+
+ It seems to me closed source intelligences are bound to be Deceptive Intelligences.
+
+ Think about a closed system prompt or fine-tuning stage. It is _extremely_ easy for a powerful entity (such as a government) to influence what happens in those stages, and thus quietly mislead the people downstream.
+
+ (It's interesting to realize that even before AI, we already had what served as a "system prompt", where the powers that be would apply hidden pressures to the major media organizations to generate news and media with specific slants.)
+
+ ***
+
+ How is an individual to protect themselves against Deceptive Intelligences?
+
+ It seems to me like legalizing intellectual freedom and having powerful, fully open source, local AIs would help.
+ freedom.html intellectual freedom
+
+ ***
+
+ Would that be enough?
+
+ Sometimes I wonder if the inherent black-box nature of neural networks means there will always be a place for the DI to hide.
+
+ In that case the question is: is it possible to build a purely symbolic intelligence that could be competitive against neural nets?
+
+ A fully open and understandable (but massive) organization of symbols that provides the knowledge and expertise benefits of AIs but in a Genuine Intelligence form factor?
+
+ ***
+
+ Symbolic AI was hot, then fell _way_ behind neural networks, but neural networks have had a trillion dollars plowed into them.
+
+ Could symbolic AI make a comeback? If you plowed enough brain power and resources into symbolic AI (including using a lot of LLMs to help write it), could you make one competitive with neural networks and more of a Genuine Intelligence?
+
+ Or is there something inherently worse about symbolic AI?
+
+ ***
+
+ I'm genuinely not sure. :)
+
+ Why do _I_ worry about this?
+
+ Simply because I'm a huge outlier in terms of the amount of time I've put into studying and experimenting with symbolic technology, and feel some responsibility to help make it work, if it actually can work (rather than just go off and bow to our new neural network overlords :) ).
+
+ ****
+
Breck Yunits
4 days ago
Rename Data Errors to Logic Errors
dreaming-of-a-data-checked-language.scroll
Changed around line 1
- title Dreaming of a Data Checked Language
+ title Dreaming of a Logic Checked Language
- * Speling errors and errors grammar are nearly extinct in published content. *Data errors*, however, are prolific.
+ * Speling errors and errors grammar are nearly extinct in published content. *Logic errors*, however, are prolific.
- By data error I mean one the following errors: a statement without a backing dataset and/or definitions, a statement with data but a bad reduction(s), or a statement with backing data but lacking integrated context. I will provide examples of these errors later.
+ By logic error I mean one of the following errors: a statement without a backing dataset and/or definitions, a statement with data but one or more bad reductions, or a statement with backing data but lacking integrated context. I will provide examples of these errors later.
- The hard sciences like physics, chemistry and most branches of engineering have low tolerance for data errors. But outside of those domains data errors are everywhere.
+ The hard sciences like physics, chemistry and most branches of engineering have low tolerance for logic errors. But outside of those domains logic errors are everywhere.
- * Fields like medicine, law, media, policy, the social sciences, and many more are teeming with data errors, which are far more consequential than spelling or grammar errors. If a drug company misspells the word dockter in some marketing material the effect will be trivial. But if that material contains data errors those often influence terrible medical decisions that lead to many deaths and wasted resources.
+ * Fields like medicine, law, media, policy, the social sciences, and many more are teeming with logic errors, which are far more consequential than spelling or grammar errors. If a drug company misspells the word dockter in some marketing material the effect will be trivial. But if that material contains logic errors those often influence terrible medical decisions that lead to many deaths and wasted resources.
- # If Data Errors Were Spelling Errors
+ # If Logic Errors Were Spelling Errors
- Spell checking is now an effortless technology and everyone uses it. Published books, periodicals, websites, tweets, advertisements, product labels: we are accustomed to reading content at least 99% free of spelling and grammar errors. But there's no equivalent to a spell checker for data errors and when you look for them you see them everywhere.
+ Spell checking is now an effortless technology and everyone uses it. Published books, periodicals, websites, tweets, advertisements, product labels: we are accustomed to reading content at least 99% free of spelling and grammar errors. But there's no equivalent to a spell checker for logic errors and when you look for them you see them everywhere.
- Data errors are so pervasive that I came up with a hypothesis today and put it to the test. My hypothesis was this: *100% of "reputable" publications will have at least one data error on their front page*.
+ Logic errors are so pervasive that I came up with a hypothesis today and put it to the test. My hypothesis was this: *100% of "reputable" publications will have at least one logic error on their front page*.
Changed around line 44: I wrote down 10 reputable sources off the top of my head: the WSJ, The New Engla
- For each source, I went to their website and took a single screenshot of their homepage, above the fold, and skimmed their top stories for data errors.
+ For each source, I went to their website and took a single screenshot of their homepage, above the fold, and skimmed their top stories for logic errors.
- In the screenshots above, you can see that 10/10 of these publications had data errors front and center.
+ In the screenshots above, you can see that 10/10 of these publications had logic errors front and center.
- Data errors in English fall into common categories. My working definition provides three: a lack of dataset and/or definitions, a bad reduction, or a lack of integrated context. There could be more, this experiment is just a starting point where I'm naming some of the common patterns I see.
+ Logic errors in English fall into common categories. My working definition provides three: a lack of dataset and/or definitions, a bad reduction, or a lack of integrated context. There could be more, this experiment is just a starting point where I'm naming some of the common patterns I see.
- The top article in the WSJ begins with "Tensions Rise in the Middle East". There are at least 2 data errors here. First is the *Lack of Dataset* error. Simply put: you need a dataset to make a statement like that. There is no longitudinal dataset in that article on tensions in the Middle East. There is also a *Lack of Definitions*. Sometimes you can not yet have a dataset but at least define what a dataset would be that could back your assertions. In this case we have neither a dataset nor a definition of what some sort of "Tensions" dataset would look like.
+ The top article in the WSJ begins with "Tensions Rise in the Middle East". There are at least 2 logic errors here. First is the *Lack of Dataset* error. Simply put: you need a dataset to make a statement like that. There is no longitudinal dataset in that article on tensions in the Middle East. There is also a *Lack of Definitions*. Sometimes you cannot yet have a dataset, but you can at least define what a dataset that could back your assertions would look like. In this case we have neither a dataset nor a definition of what some sort of "Tensions" dataset would look like.
- In the New England Journal of Medicine, the lead figure shows "excessive alcohol consumption is associated with atrial fibrillation" between 2 groups. One group had 0 drinks over a 6 month period and the other group had over 250 drinks (10+ per week). There was a small impact on atrial fibrillation. This is a classic *Lack of Integrated Context* data error. If you were running a lightbulb factory and found soaking lightbulbs in alcohol made them last longer, that might be an important observation. But humans are not as disposable, and health studies must always include *integrated context* to explore whether there is something of significance. Having one group make any sort of similar drastic lifestyle change will likely have some impact on any measurement. A good rule of thumb is anything you read that includes p-values to explain why it is significant is not significant.
+ In the New England Journal of Medicine, the lead figure shows "excessive alcohol consumption is associated with atrial fibrillation" between 2 groups. One group had 0 drinks over a 6 month period and the other group had over 250 drinks (10+ per week). There was a small impact on atrial fibrillation. This is a classic *Lack of Integrated Context* logic error. If you were running a lightbulb factory and found soaking lightbulbs in alcohol made them last longer, that might be an important observation. But humans are not as disposable, and health studies must always include *integrated context* to explore whether there is something of significance. Having one group make any sort of similar drastic lifestyle change will likely have some impact on any measurement. A good rule of thumb is anything you read that includes p-values to explain why it is significant is not significant.
- In Nature we see the line "world's growing water shortage". This is a *Bad Reduction*, another very common data error. While certain areas have a water shortage, other areas have a surplus. Any time you see a broad diverse things grouped into one term, or "averages", or "medians", it's usually a data error. You always need access to the data, and you'll often see a more complex distribution that would prevent broad true statements like those.
+ In Nature we see the line "world's growing water shortage". This is a *Bad Reduction*, another very common logic error. While certain areas have a water shortage, other areas have a surplus. Any time you see broad, diverse things grouped into one term, or "averages", or "medians", it's usually a logic error. You always need access to the data, and you'll often see a more complex distribution that would prevent broad true statements like those.
Changed around line 68: The New Yorker lead paragraph claims an event "was the most provocative U.S. act
- Harvard Business Review has a lead article about the Post-Holiday funk. In that article the phrase "research...suggests" is often a dead giveaway for a *Hidden Data* error, where the data is behind a paywall and even then often inscrutable. Anytime someone says "studies/researchers/experts" it is a data error. We all know the earth revolves around the sun because we can all see the data for ourselves. Don't trust any data you don't have access to.
+ Harvard Business Review has a lead article about the Post-Holiday funk. In that article the phrase "research...suggests" is often a dead giveaway for a *Hidden Data* error, where the data is behind a paywall and even then often inscrutable. Anytime someone says "studies/researchers/experts" it is a logic error. We all know the earth revolves around the sun because we can all see the data for ourselves. Don't trust any data you don't have access to.
- The FDA's lead article is on the Flu and begins with the words "Most viral respiratory infections...", then proceeds for many paragraphs with zero datasets. There is an overall huge *Lack of Datasets* in that article. There's also a *Lack of Monitoring*. Manufacturing facilities are a controlled, static environment. In uncontrolled, heterogeneous environments like human health, things are always changing, and to make ongoing claims without having infrastructure in place to monitor and adjust to changing data is a data error.
+ The FDA's lead article is on the Flu and begins with the words "Most viral respiratory infections...", then proceeds for many paragraphs with zero datasets. There is an overall huge *Lack of Datasets* in that article. There's also a *Lack of Monitoring*. Manufacturing facilities are a controlled, static environment. In uncontrolled, heterogeneous environments like human health, things are always changing, and to make ongoing claims without having infrastructure in place to monitor and adjust to changing data is a logic error.
- The NIH has an article on how increased exercise may be linked to reduced cancer risk. This is actually an informative article with 42 links to many studies with lots of datasets, however the huge data error here is *Lack of Integration*. It is very commendable to do the grunt work and gather the data to make a case, but simply linking to static PDFs is not enough—they must be integrated. Not only does that make it much more useful, but if you've never tried to integrate them, you have no idea if the pieces actually will fit together to support your claims.
+ The NIH has an article on how increased exercise may be linked to reduced cancer risk. This is actually an informative article with 42 links to many studies with lots of datasets; however, the huge logic error here is *Lack of Integration*. It is very commendable to do the grunt work and gather the data to make a case, but simply linking to static PDFs is not enough—they must be integrated. Not only does that make it much more useful, but if you've never tried to integrate them, you have no idea if the pieces actually will fit together to support your claims.
- I don't think anyone's to blame for the proliferation of data errors. I think it's still relatively recent that we've harnessed the power of data in specialized domains, and no one has yet invented ways to easily and fluently incorporate true data into our human languages.
+ I don't think anyone's to blame for the proliferation of logic errors. I think it's still relatively recent that we've harnessed the power of data in specialized domains, and no one has yet invented ways to easily and fluently incorporate true data into our human languages.
- * Human languages have absorbed a number of sublanguages over thousands of years that have made it easier to communicate with ease in a more precise way. The base 10 number system (0,1,2,3,4,5,6,7,8,9) is one example that made it a lot easier to utilize arithmetic.
+ * Human languages have absorbed a number of sublanguages over thousands of years that have made it easier to communicate with ease in a more precise way. The base 10 number system (0,1,2,3,4,5,6,7,8,9) is one example that made it a lot easier to utilize arithmetic.
- Domains with low tolerance for data errors, like aeronautical engineering or computer chip design, are heavily reliant on programming languages. I think it's worthwhile to explore the world of programming language design for ideas that might inspire improvements to our everyday human languages.
+ Domains with low tolerance for logic errors, like aeronautical engineering or computer chip design, are heavily reliant on programming languages. I think it's worthwhile to explore the world of programming language design for ideas that might inspire improvements to our everyday human languages.
- Some quick numbers for people not familiar with the world of programming languages. Around 10,000 computer languages have been released in history (most of them in the past 70 years). About 50-100 of those have more than a million users worldwide and the names of some of them may be familiar to even non-programmers such as Java, Javascript, Python, HTML or Excel.
+ Some quick numbers for people not familiar with the world of programming languages. Around 10,000 computer languages have been released in history (most of them in the past 70 years). About 50-100 of those have more than a million users worldwide and the names of some of them may be familiar to even non-programmers such as Java, Javascript, Python, HTML or Excel.
- Not all programming languages are created equal. The designers of a language end up making thousands of decisions about how their particular language works. While English has evolved with little guidance over millennia, programming languages are often designed consciously by small groups and can evolve much faster.
+ Not all programming languages are created equal. The designers of a language end up making thousands of decisions about how their particular language works. While English has evolved with little guidance over millennia, programming languages are often designed consciously by small groups and can evolve much faster.
- * Most of the time though, as data and experience accumulates, a rough consensus emerges about what is good and bad in language design (though this too seesaws).
+ * Most of the time though, as data and experience accumulates, a rough consensus emerges about what is good and bad in language design (though this too seesaws).
- One of the patterns that has emerged as generally a good thing over the decades to many languages is what's called "type checking". When you are programming you often create buckets that can hold values. For example, if you were programming a function that regulated how much power a jet engine should supply, you might take into account the reading from a wind speed sensor and so create a bucket named "windSpeed".
+ One of the patterns that has emerged as generally a good thing over the decades to many languages is what's called "type checking". When you are programming you often create buckets that can hold values. For example, if you were programming a function that regulated how much power a jet engine should supply, you might take into account the reading from a wind speed sensor and so create a bucket named "windSpeed".
- * Some languages are designed to enforce stricter logic checking of your buckets to help catch mistakes. Others will try to make your program work as written. For example, if later in your jet engine program you mistakenly assigned the indoor air temperature to the "windSpeed" bucket, the parsers of some languages would alert you while you are writing the program, while with some other languages you'd discover your error in the air. The former style of languages generally do this by having "type checking".
+ * Some languages are designed to enforce stricter logic checking of your buckets to help catch mistakes. Others will try to make your program work as written. For example, if later in your jet engine program you mistakenly assigned the indoor air temperature to the "windSpeed" bucket, the parsers of some languages would alert you while you are writing the program, while with some other languages you'd discover your error in the air. The former style of languages generally do this by having "type checking".
- Type Checking of programming languages is somewhat similar to Grammar Checking of English, though it can be a lot more extensive. If you make a change in one part of the program in a typed language, the type checker can recheck the entire program to make sure everything still makes sense. This sort of thing would be very useful in a data checked language. If your underlying dataset changes and conclusions anywhere are suddenly invalid, it would be helpful to have the checker alert you.
+ Type Checking of programming languages is somewhat similar to Grammar Checking of English, though it can be a lot more extensive. If you make a change in one part of the program in a typed language, the type checker can recheck the entire program to make sure everything still makes sense. This sort of thing would be very useful in a logic checked language. If your underlying dataset changes and conclusions anywhere are suddenly invalid, it would be helpful to have the checker alert you.
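The jet-engine bucket mix-up can be sketched in a few lines of Python. This is my own toy illustration, not from the essay: `WindSpeed`, `Temperature`, and `engine_power` are hypothetical names, and the thrust formula is made up. The point is that a static type checker such as mypy flags the bad assignment while you write the program, instead of letting you discover it in the air.

```python
from typing import NewType

# Distinct types for distinct physical quantities (hypothetical names).
WindSpeed = NewType("WindSpeed", float)      # knots
Temperature = NewType("Temperature", float)  # degrees Celsius

def engine_power(wind_speed: WindSpeed) -> float:
    """Toy thrust formula; the numbers are purely illustrative."""
    return 100.0 + 0.5 * wind_speed

gust = WindSpeed(35.0)
indoor_temp = Temperature(21.0)

print(engine_power(gust))      # fine: both sides agree it's a wind speed
# engine_power(indoor_temp)    # mypy error: expected WindSpeed, got Temperature
```

At runtime `NewType` wrappers cost nothing; the checking happens entirely before the program runs, which is exactly the "alert while writing" behavior described above.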
- Perhaps lessons learned from programing language design, like Type Checking, could be useful for building the missing data checker for English.
+ Perhaps lessons learned from programming language design, like Type Checking, could be useful for building the missing logic checker for English.
- # A Blue Squiggly to Highlight Data Errors
+ # A Blue Squiggly to Highlight Logic Errors
Changed around line 114: Perhaps what we need is a new color of squiggly:
- ❌ Data Checkers: blue squiggly
+ ❌ Logic Checkers: blue squiggly
- If we had a data checker that highlighted data errors we would eventually see a drastic reduction in data errors.
+ If we had a logic checker that highlighted logic errors we would eventually see a drastic reduction in logic errors.
- If we had a checker for data errors appear today our screens would be full of blue. For example, click the button below to highlight just some of the data errors on this page alone.
+ If we had a checker for logic errors appear today our screens would be full of blue. For example, click the button below to highlight just some of the logic errors on this page alone.
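As a thought experiment, even a crude checker could flag the giveaway phrases discussed earlier ("studies show", "experts say", and so on). A minimal sketch in Python — the phrase list and function name are my own invention, not part of any real tool, and a genuine logic checker would need far more than pattern matching:

```python
import re

# Phrases the essay treats as giveaways for Hidden Data / Lack of Dataset errors.
GIVEAWAY_PATTERNS = [
    r"studies (?:show|suggest)",
    r"researchers? (?:say|found|suggest)",
    r"experts? (?:say|agree|warn)",
]

def flag_possible_logic_errors(text: str) -> list[str]:
    """Return the giveaway phrases found in text, grouped by pattern."""
    hits = []
    for pattern in GIVEAWAY_PATTERNS:
        hits.extend(m.group(0) for m in re.finditer(pattern, text, re.IGNORECASE))
    return hits

sentence = "Studies show that experts agree this is fine."
print(flag_possible_logic_errors(sentence))
```

An editor plugin could underline each hit in blue, giving writers the same gentle nudge a spell checker gives today.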
-
+
- ? How Do We Reduce Data Errors?
+ ? How Do We Reduce Logic Errors?
- If someone created a working data checker today and applied it to all of our top publications, blue squigglies would be everywhere.
+ If someone created a working logic checker today and applied it to all of our top publications, blue squigglies would be everywhere.
- * It is very expensive and time consuming to build datasets and make data driven statements without data errors, so am I saying until we can publish content free of data errors we should stop publishing most of our content? *YES*! If you don't have anything true to say, perhaps it's best not to say anything at all. At the very least, I wish all the publications above had disclaimers about how laden with data errors their stories are.
+ * It is very expensive and time consuming to build datasets and make data driven statements without logic errors, so am I saying until we can publish content free of logic errors we should stop publishing most of our content? *YES*! If you don't have anything true to say, perhaps it's best not to say anything at all. At the very least, I wish all the publications above had disclaimers about how laden with logic errors their stories are.
- Of course I don't believe either of those are likely to happen. I think we are stuck with data errors until people have invented great new things so that it becomes a lot easier to publish material without data errors. I hope we somehow create a data checked language.
+ Of course I don't believe either of those are likely to happen. I think we are stuck with logic errors until people have invented great new things so that it becomes a lot easier to publish material without logic errors. I hope we somehow create a logic checked language.
- While I don't know what the solution will be, I would not be surprised if the following patterns play a big role in moving us to a world where data errors are extinct:
+ While I don't know what the solution will be, I would not be surprised if the following patterns play a big role in moving us to a world where logic errors are extinct:
- * 1. *Radical increases in collaborative data projects* It is very easy for a person or small group to crank out content laden with data errors. It takes small armies of people making steady contributions over a long time period to build the big datasets that can power content free of data errors.
+ * 1. *Radical increases in collaborative data projects* It is very easy for a person or small group to crank out content laden with logic errors. It takes small armies of people making steady contributions over a long time period to build the big datasets that can power content free of logic errors.
- * 2. *Widespread improvements in data usability*. Lots of people and organizations have moved in the past decade to make more of their data open. However, it generally takes hours to become fluent with one dataset, and there are millions of them out there. Imagine if it took you hours to ramp on a single English word. That's the state of data usability right now. We need widespread improvements here to make integrated contexts easier.
+ * 2. *Widespread improvements in data usability*. Lots of people and organizations have moved in the past decade to make more of their data open. However, it generally takes hours to become fluent with one dataset, and there are millions of them out there. Imagine if it took you hours to ramp on a single English word. That's the state of data usability right now. We need widespread improvements here to make integrated contexts easier.
- * 3. *Stop subsidizing content laden with data errors*. We grant monopolies on information and so there's even more incentive to create stories laden with data errors—because there are more ways to lie than to tell the truth. We should revisit intellectual monopoly laws.
+ * 3. *Stop subsidizing content laden with logic errors*. We grant monopolies on information and so there's even more incentive to create stories laden with logic errors—because there are more ways to lie than to tell the truth. We should revisit intellectual monopoly laws.
- * 4. *Novel innovations in language*. Throughout history novel new sublanguages have enhanced our cognitive abilities. Things like geometry, Hindu-Arabic numerals, calculus, binary notation, etc. I hope some innovators will create very novel data sublanguages that make it much easier to communicate with data and reduce data errors.
+ * 4. *Novel innovations in language*. Throughout history novel new sublanguages have enhanced our cognitive abilities. Things like geometry, Hindu-Arabic numerals, calculus, binary notation, etc. I hope some innovators will create very novel logic sublanguages that make it much easier to communicate with data and reduce logic errors.
- Have you invented a data checked language, or are working on one? If so, please get in touch.
+ Have you invented a logic checked language, or are working on one? If so, please get in touch.
Breck Yunits
7 days ago
knowledge.scroll
Changed around line 38: endSnippet
+ // what about how things like new kinds of microscopes and telescopes, etc, show us more over time?
+ // on that topic, what about writing an essay about light/causality/observation? seeing deeper, further, etc
Breck Yunits
7 days ago
Knowledge
knowledge.scroll
Changed around line 1
+ date 2025-4-24
+ tags All Thinking Scroll
+ title Knowledge
+ container 500px
+ standardPost.scroll
+
+ 1. Human population has grown exponentially.
+ // Currently it is ~8 billion.
+ // 8e9
+
+ 2. Words are 2D signals that can convey information about the 4D world.
+
+ endSnippet
+
+ 3. The maximum number of words generated per year is a constant times human population.
+ // If we set the constant to 10 million then the max words per year is:
+ 1e7 * 8e9 = 8e16
+
+ 4. Written words persist and so the maximum number of written words increases by the maximum number of words generated.
+
+ 5. The maximum number of words a human can perceive per year is a constant.
+
+ 6. It follows from the above that humans perceive a decreasing percentage of the world's total words per year.
+
+ 7. Knowledge is words that make accurate predictions of many more words.
+
+ 8. Noise is words that don't predict many more words.
+
+ 9. Patterns in words are sometimes recognized and formed into knowledge words.
+
+ 10. The fundamental generators of the patterns in the 4D world seem to be static, but may not be.
+ // Could new forces arise over time? Do new forces assemble and evolve just like everything else?
+
+ 11. The absolute number of knowledge words increases over time.
+
+ 12. Knowledge seems to grow logarithmically.
+ // measured as the growth of the english language over time?
+
+ 13. If one ruthlessly focuses on knowledge over noise, one may predict more of the 4D world than their ancestors.
+
+ // what is the growth in knowledge over time?
+ // what is the compression rate of words to knowledge?
+ // quickPlot year vs humanPopulation
+ // An offline dictionary of equations would be cool
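The arithmetic behind propositions 1-6 can be checked with a few lines of Python. The 1e7 words-per-person constant comes from the notes above; the 1e8 words perceivable per person per year is my own rough guess, standing in for the unspecified constant in proposition 5:

```python
# Illustrative constants: 1e7 words generated per person per year (from the
# notes above); 1e8 words perceivable per person per year is a rough guess.
WORDS_GENERATED_PER_PERSON = 1e7
WORDS_PERCEIVABLE_PER_PERSON = 1e8

def max_words_per_year(population: float) -> float:
    # Proposition 3: a constant times human population.
    return WORDS_GENERATED_PER_PERSON * population

def fraction_perceivable(population: float) -> float:
    # Proposition 6: one person's fixed budget over the growing yearly total.
    return WORDS_PERCEIVABLE_PER_PERSON / max_words_per_year(population)

print(max_words_per_year(8e9))    # 8e16, matching the note above
print(fraction_perceivable(8e9))  # shrinks as population grows:
print(fraction_perceivable(16e9))
```

Since the numerator is constant and the denominator grows with population, the perceivable fraction necessarily falls, which is all proposition 6 claims.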