see also this nice exposition about its workings from a more idealistic perspective
i have been watching the OEIS from the inside for a long time
it is somewhat uniquely positioned as a site which both welcomes contributions from anyone and, unlike Wikipedia or StackExchange, requires moderators to manually approve every edit.
it is also chronically understaffed, and a lot of the staff it does have are chronically pedantic.
many see each open draft as an opportunity to teach the contributor a lesson, flagging the issue they see in a comment and awaiting the original submitter's revision to remediate it.
the problem with this is that, with editors thinly stretched and unable to guarantee prompt responses, each such round trip adds anywhere from hours (at best) to weeks (though usually only days) of waiting before one may start a new draft slot.
and when the issues are things like leaving one added formula unsigned, or one program not demarcated as being in Python when it clearly starts by importing sympy, the unwillingness to fix what they point out (and the implication that you could learn by fixing it yourself, and that it's worth slowing your contribution by hours to days to teach you this lesson) comes across as condescending.
many contributors who demonstrably produced interesting things in high volume over multi-year periods have since departed, and others have snapped and been banned. (i myself was subject to a year-long ban for editing in others' draft slots, and after coming back resolved to use the wiki as my primary outlet to prevent a recurrence)
however, it helps a lot to bear in mind that the senior editors aren't malicious, and to be endlessly patient.
as you can possibly tell from this website, i like lightly-formatted monospaced text a lot; i've seen the lack of LaTeX support levelled as a criticism/area of possible improvement a few times, but am relatively ambivalent on that front. an optional LaTeX frontend would be nice, but its usefulness would be inhibited by the existing strict rules about notation intended to minimise ambiguity (i.e. we'd still have to write \(|\mathrm{Stirling1}(n,k)|\) instead of \({n\brack k}\)). (i personally solve this problem by spinning up a userspace page with extended notes upon a sequence whenever i have them, but have been told this is an unintended use case and that i should expect to fight breaking changes.)
the main two areas for improvement i can see are not in the database's presentation to readers at all, but in interaction between contributors: the draft review system and the venue for correspondence.
i would like to propose the creation of a forum, and phpBB seems the best software for that; but, as a (mostly inactive) ConwayLife community member, i know the biggest problem with their phpBB forum: the search.
it refuses to accept your query if it contains words that are too common, and (to my knowledge) doesn't allow searching for exact matches of phrases. this is particularly an issue when you want to search for a rule (if someone has written it with a slash, like B3/S23, it will interpret those as two different words) or an apgcode (xq4_153 is likewise two words, and xq4 is too common)
(currently, if you discover a pattern and want to know if it's known, you can enter it into Catagolue and see whether it's present in a forum-scraping census like oscthread_stdin. This doesn't tell you where it was discovered or by whom, and almost certainly won't tell you even in which haul it was uploaded; the attribute pages are usually empty for some reason)
however, i don't think any analogous problems would carry over to the OEIS
as someone who does a lot of the kind of maths that is amenable to computers, i have become familiar with a few CASs.
note that WolframScript is free for personal use, without requiring any proof of affiliation to an institution!
irrespective of my opinions on Stephen Wolfram or his antics, Mathematica is the most extensive one for most of the things i want to do
it also benefits from inertia, with a vast ecosystem put together by community goodwill; i have gotten a great deal of use out of the RiscErgoSum library.
however, it seems some of its decisions are hostile to those wishing to depart.
a notorious example i've heard of is Mathematica's PDF exporting; this mathematica.se question should give you some sense of how bad it is, presumably with the intention of incentivising not exporting at all and instead requiring the recipient to also have Mathematica
notebooks published by Stephen Wolfram himself seem not to have any such issues, leading one to speculate that he has his own patched version. regardless, for the notebookiarily inclined, Jupyter resolves this.
i like its lispiness (see Hans Lundmark's notes for a brief overview) but dislike how anti-metaprogramming many fundamental pieces of it seem to be; see for instance this typical use case
In:= f=FullSimplify[D[y/(y-1)*((2-E^x)^(1-y)-1),{y,k}]/k!/.y->0]
In:= MatrixForm[Table[Series[f,{x,0,4}],{k,0,4}]]
Out//MatrixForm=
  0
  x + x^2/2 + x^3/6 + x^4/24 + O[x]^5
  x^2/2 + 2 x^3/3 + 5 x^4/8 + O[x]^5
  x^3/6 + 3 x^4/8 + O[x]^5
  x^4/24 + O[x]^5
In:= MatrixForm[Table[List@@Series[f,{x,0,4}][[3]]*Range[k,4]!,{k,0,4}]]
Part::partd: Part specification 0[[3]] is longer than depth of object.
Thread::tdlen: Objects of unequal length in {0, 3} {1, 1, 2, 6, 24} cannot be combined.
Out//MatrixForm=
  {0, 3} {1, 1, 2, 6, 24}
  {1, 1, 1, 1}
  {1, 4, 15}
  {1, 9}
  {1}
for context: casting an empty Series to a List does not yield an empty list, because Series of an identically-zero expression returns plain 0 rather than a SeriesData object; the place where the coefficient list is meant to be is omitted entirely, so the [[3]] has nothing to grab.
this and many other problems like it make building anything beyond one-liners a more annoying and exceptionhandlingful experience than it ought to be!
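one workaround i'd sketch (the helper names g and row are mine, and this is a defensive sketch rather than anything battle-tested): strip the O[x]^5 with Normal before extracting coefficients, so the zero row degrades to an empty CoefficientList that PadRight can zero-fill instead of erroring; defining g as a function of k also avoids asking D for a symbolic-order derivative up front.

In:= g[k_] := D[y/(y-1)*((2-E^x)^(1-y)-1), {y,k}]/k! /. y->0
In:= row[k_] := PadRight[CoefficientList[Normal[Series[g[k], {x,0,4}]], x], 5]
        (* Normal strips the O[x]^5; CoefficientList[0,x] gives {}, which PadRight fills with zeroes *)
In:= MatrixForm[Table[row[k]*Range[0,4]!, {k,0,4}]]
        (* Factorial is Listable, so Range[0,4]! is {1,1,2,6,24} *)

this yields the same triangle as before, with the k=0 row rendered as five zeroes (and leading zeroes in the other rows) rather than a Part error.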
it is also somewhat upsetting, in principle, that its machinations are unknowable, and concerning that it's capable of being wrong. erroneous outputs are somewhat pervasive and can be encountered serendipitously; this question concerns one i found (a certain Laplace transform upon which Integrate returns a value 1 lower than the truth given by NIntegrate), which Daniel Lichtblau filed in the issue tracker, and this question concerns one that persisted for the 7 years since its discovery
it seems guarantees of implementation correctness (via a proof language) would be at least as useful for a CAS as for conventional software
i am a programmer and a mathematician, but (currently) identify much more strongly with both of those things as an amateur than as a student or professional, or otherwise someone with a monetary incentive to crank things out on a deadline or at scale
for this reason, i consider myself a largely unbiased judge of LLMs
i started interacting with ChatGPT in 2023 by giving it some one-liners in my typical idiosyncratic style and asking it to discern their purpose. on this front i was very thoroughly disappointed: any sign of understanding it appeared to bear was quickly dispelled by rewriting them with the variable names ordered alphabetically by appearance and finding it display the same confidence in nonsense, or by replacing i,j,x with n,k,j and watching it suddenly believe that my Faddeev-LeVerrier minpoly-finder was actually the binomial theorem
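to give a flavour of the genre, a hypothetical reconstruction (flcp is my coinage here; it is not the actual one-liner i showed it, and this variant finds the characteristic polynomial rather than the minimal one):

In:= flcp[a_?SquareMatrixQ, t_] := Module[{n = Length[a], m = 0*a, c},
       c = ConstantArray[0, n+1]; c[[n+1]] = 1;  (* c[[j+1]] holds the coefficient of t^j *)
       Do[m = a.m + c[[n-k+2]] IdentityMatrix[n]; c[[n-k+1]] = -Tr[a.m]/k, {k, n}];
       c.t^Range[0, n]]
In:= flcp[{{2,1},{0,3}}, t]
Out= 6 - 5 t + t^2

renaming a, m, c, t to taste was the whole experiment: the algorithm is unchanged, but the model's story about it wasn't.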
i did this because i had heard strong conviction in its intelligence from some of my peers, and assumed that this, a test outside of what it had been hyperoptimised to appear intelligent upon, would be a good way of probing raw reasoning; upon getting my answer, i lost interest
then, in early 2025, a close friend on Discord (whose judgement i respect a lot more than that of the aforementioned peers) told me they had had some success using DeepSeek-R1 on a few maths problems, finding it somewhat useful for their own research maths, which made me curious
however, the tipping point for my whole outlook on them came when i was discussing a problem with my friend
being new to this area and self-taught (having approached it sideways, starting from being intrigued by Ramanujan's finding listed on A000110's page), i had some grasp of how the tools worked but little of how to apply them, and repeatedly found myself hitting walls and frustration in attempted derivations
i gave Cosmia a very thorough description of my problem, which she inputted verbatim into ChatGPT; in reasoning mode it was able to find a similar formula on its own, translating to \(A162973(n)\sim\frac{n!}{e}\left(H_n-1+\sum_{j=0}^\infty\frac{\int_{-1}^0\mathrm{Bell}_j(x)\,\mathrm dx}{n^{j+1}}\right)\).
i later learned about Watson's lemma (and how its converse is an extremely useful incorrect theorem that very often holds or almost-holds), and did some more interesting things with it (that are beside the point here)
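for the curious, the simplest case of the lemma in my own paraphrase (so double-check before leaning on it): for \(f\) continuous on \([0,T]\) with an asymptotic expansion at \(0^+\),
\[f(t)\sim\sum_{n=0}^\infty a_nt^n\ (t\to0^+)\implies\int_0^Te^{-xt}f(t)\,\mathrm dt\sim\sum_{n=0}^\infty\frac{a_n\,n!}{x^{n+1}}\ (x\to\infty).\]
the "incorrect converse" is reading an expansion in \(1/x\) off the integral and inferring the integrand's behaviour at \(0^+\); that is not a theorem, but it works disturbingly often.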
ChatGPT in particular is capable of being extremely useful at a surprisingly broad range of tasks; specifically, ones for which the possibility of hallucination is mitigated by default (since checking its correctness on them is inexpensive time/effortwise)
it is useful in a variety of applications i wouldn't have dared guess possible before witnessing them, and there is a spark of something extremely interesting within it worth nurturing and investigating
i recognise that this view is quite reminiscent of the 'hype' from CEOs and investors (which many have come to loathe), but i (frugal as i am) have not spent any money on any LLM yet!
if i were appointed benevolent dictator for life of Earth, i would establish a working group to produce a high-quality corpus of maths training data at scale by hand (many hands)
my interest in generating functions and Stirling numbers has given me some insight as to where arithmetic slips originate; fragmented and ambiguous notation is a very big factor! the most common frustration i have is that it forgets a pair of square brackets is a coefficient extractor, and pulls a factorial out of it instead of multiplying both the inside and the outside by the same amount; and it confuses the kinds of Stirling number for each other more than i think it would if Knuth's \brack and \brace were universally adopted for them
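for concreteness (my own summary of standard facts, nothing novel): in Knuth's notation the two kinds are visually distinct, and their extractor identities are
\[{n\brack k}=|s(n,k)|=n!\,[x^n]\,\frac{(-\ln(1-x))^k}{k!},\qquad{n\brace k}=S(n,k)=n!\,[x^n]\,\frac{(e^x-1)^k}{k!},\]
while the bracket rule that keeps getting fumbled is that the extractor scales like a denominator: \([x^n/n!]\,f(x)=n!\,[x^n]\,f(x)\), so moving the factorial out multiplies the outside rather than dividing it.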
i understand that LLMs (and their counterpart, the diffusion models that proliferated after a breakthrough at about the same time) are capable of evil (dilution of the internet as a repository of humanity's thought and media, and making SEO spam much more difficult to reliably detect automatically, including by untrained humans), but this should not tinge your views of their uses for good
see also the AlphaEvolve technical report/whitepaper and (more pertinently for those interested in its application) Mathematical exploration and discovery at scale
this (gcc on macOS silently being clang) has most likely cost thousands of person-hours of productivity; i spent an embarrassingly long time trying to work out why OpenMP wasn't working for anything before learning of it
the origin of this debacle is that gcc updated to GPLv3, whose terms Apple was unwilling to ship under, so macOS switched to clang; but, not wanting to break existing programs which had gcc hardcoded, Apple aliased the name rather than giving a clear error message
i strongly advise against both if you can help it