I realize that the traditional algorithm for multiplication is O(n^2) in the number of digits, but it’s kind of a questionable example to use in a CS context, where multiplication is almost always treated as O(1).
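For anyone who hasn’t seen it counted out explicitly, here’s a minimal sketch of the schoolbook (long multiplication) algorithm on little-endian digit lists, instrumented to count single-digit multiplications. The helper name and the digit-list representation are my own choices for illustration; the point is just that the count comes out to len(a) * len(b), i.e. O(n^2) for two n-digit numbers.

```python
def schoolbook_multiply(a, b):
    """Multiply two non-negative integers given as little-endian digit lists.

    Returns (product_digits, digit_ops). The schoolbook method does one
    single-digit multiplication per (digit of a, digit of b) pair, so
    digit_ops == len(a) * len(b) -- quadratic when both inputs have n digits.
    """
    result = [0] * (len(a) + len(b))
    digit_ops = 0
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            digit_ops += 1  # one elementary digit-by-digit multiply
            total = result[i + j] + da * db + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b)] += carry
    # drop leading zeros (trailing entries in little-endian order)
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return result, digit_ops
```

For example, 1234 × 5678 (digit lists [4,3,2,1] and [8,7,6,5]) performs exactly 4 × 4 = 16 digit multiplications.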
Good article, though, I hadn’t considered it as an equivalence class before.
The “CS context” is very broad and diverse. In the context of teaching (which is, I think, closest to what you mean), arithmetic is treated as O(1) primarily because it simplifies the analysis of most common algorithms. In that context, the most important skill is identifying some set of operations of interest and showing how their count scales as the input grows to infinity. Instructors are primarily interested in ensuring students can do this analysis; the exact set of operations is unimportant. The side effect of this approach, though, is that students walk away with weird, incorrect assumptions about how the hardware works.
In non-academic contexts where performance matters, one should almost never assume anything is O(1).
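As a concrete (hedged) illustration in Python: timing arbitrary-precision integer multiplication shows the cost growing with operand size, so even “plain multiplication” isn’t O(1) once operands exceed machine-word size. The specific sizes and iteration counts here are arbitrary choices for the sketch; exact timings will vary by machine and interpreter.

```python
import timeit

# Demonstrate that Python's big-int multiplication is not O(1):
# the time per multiply grows with the number of digits.
small = 10**10 + 7        # ~10 digits, fits comfortably in a few machine words
big = 10**100_000 + 7     # ~100,000 digits

t_small = timeit.timeit(lambda: small * small, number=100)
t_big = timeit.timeit(lambda: big * big, number=100)

print(f"small operands: {t_small:.6f}s, big operands: {t_big:.6f}s")
```

On any machine the big-operand case should be slower by several orders of magnitude, even though both lines are “one multiplication” in the usual unit-cost model.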