
A couple decades ago everyone was on static types. But then people got sick of the boilerplate, and in what I think was a backlash, dynamic languages like JavaScript, Python, Ruby, etc. took the world by storm. With the raised bar of developer expectations when it comes to agility, static type systems were forced to innovate, and now type-inference and related features are coming to all static languages and bringing us back around to a best-of-both worlds situation. Exciting times.


Static type systems with global type inference haven't had this problem since the beginning (OCaml, for example, appeared right around the time Java did). However, for some obscure reason, the shittier the technology, the better its chances of becoming popular.

Try Elm as a simple example (can be done in a weekend), it'll probably blow your mind. You don't have to write type annotations at all, but the compiler complains at build time if the same function is called with two different types in two places.
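For readers who haven't tried an inference-heavy language, here's a rough TypeScript sketch of the same idea. TypeScript's inference is local rather than global like Elm's, but it still catches mismatched calls at build time; the function names here are made up for illustration:

```typescript
// No return-type annotations anywhere: the compiler infers them.
const double = (n: number) => n * 2;     // inferred: (n: number) => number
const shout = (s: string) => s + "!";    // inferred: (s: string) => string

const x = double(21);      // x inferred as number
const y = shout("hello");  // y inferred as string

// shout(x);  // build-time error: number is not assignable to string

console.log(x, y); // 42 "hello!"
```

The difference the commenter is pointing at: in Elm/OCaml you could also drop the `n: number` parameter annotations, because global inference works them out from the call sites.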


Local inference is a gift sent from heaven; global inference, not so much. Reading OCaml (and F#) is exhausting because you have to look into the implementation of each function (or into a separate interface file) to figure out how it's supposed to be called and what it is going to return.


My IDE puts annotations on every function and I can hover over values. Typing them out seems pointless with the right tooling.


You can hover over one thing at a time, but you can skim a page with your eyes very quickly. Making important information cumbersome to access is a terrible idea.


One could argue that this is better covered by IDE/tooling showing you inferred types / suggestions.


If the function is printed in a journal article, or you just use something like 'less' to look at the file, all that information is not available.

I find it kind of weird that we've normalized not being able to read code outside of its proper program. Since those IDEs often cost money, it starts to feel a bit like a step towards a walled garden to me, and I'm not sure it's good for actual computer science.


I think development containers are a better way to deliver code with a paper.

Besides, the paper itself should optimise for clarity, and global-type-inference languages allow type annotations where they might help. They simply let you skip the annotations that only add noise.


Not sure I'd say the IDEs "often cost money" these days


Wait for VSCode Enterprise.


That's what your IDE or LSP is for.

That's also the difference in 'philosophy' between OCaml, F#, and Haskell. Haskell has the 'same' global type inference, but the community treats type annotations as documentation (often the only documentation ;)).


> However, for some obscure reasons the shittier the technology, the more chances it has at becoming popular.

The reasons are simple: it's promoted by a big company. Same reason why C# and Go are popular.


Exactly my experience. Coming from C# around 2014 to PHP, then Node.js (due to company setup), TypeScript is the best of both worlds.

I won't be going back to static typing for a while, unless I need a higher-precision, higher-reliability module.


Damn, I’m missing union types in C# so much! TypeScript makes it so simple to compose types.


Indeed. Union and intersection types are a godsend. Furthermore, arguments don't need to be class instances of an explicit type; an object with the required properties is enough. It's much, much easier to work with, especially when templating / developing an engine.
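A small, self-contained sketch of both points, union types plus structural typing; the type and function names here are invented for illustration:

```typescript
// A discriminated union: a value is exactly one of these shapes,
// and the compiler checks every case is handled.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle": return Math.PI * s.radius ** 2;
    case "rect":   return s.width * s.height;
  }
}

// Structural typing: any object with the right properties is accepted;
// no class declaration or explicit Shape annotation needed.
const a = area({ kind: "rect", width: 3, height: 4 }); // 12
```

In nominally typed C# you'd declare classes (or wait for language support for unions) to express the same thing.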


What you see as innovation is just slow diffusion into the mainstream; the last decade is basically the '80s SML family.

The funny (or sad) part is that the explicit, verbose C# 9 style would have been the only one accepted before. If you wrote implicitly typed variables, people would get angry (I think there are many online articles about how Java 10's `var` was bad).


> What you see as innovation is just slow diffusion into mainstream

It can be both. Type inference isn't a brand-new idea, but it still takes work to diffuse it into mainstream, practical languages, especially retrofitting it onto existing languages that weren't designed for it. That still counts as innovation in my book.


Yeah, fair point. I'm just a bit salty that a lot of people only see the late-stage effect on their language and might assume it came out of a vacuum, you know. Then they look at you weird with your Scheme, SML, and Prolog. Alas.


I love type inference. Strong typing makes maintenance and refactoring much easier and with type inference the code looks very good.

I still remember how ugly and tedious STL iterators were in C++. Now it’s just “auto”. LINQ also wouldn’t work without type inference.
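The same effect shows up in TypeScript: in a LINQ-style chain, every intermediate type is inferred, so nothing needs spelling out. A sketch with made-up data:

```typescript
const users = [
  { id: 1, name: "Ada" },
  { id: 2, name: "Grace" },
  { id: 0, name: "guest" },
];

// Each step's element type is inferred; `names` ends up as string[]
// without a single annotation, much like `auto` for C++ iterators.
const names = users
  .filter(u => u.id > 0)
  .map(u => u.name);

console.log(names); // ["Ada", "Grace"]
```

Spelling out the intermediate types by hand here would be exactly the iterator boilerplate the comment is describing.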


Some form of type inference was proposed for C: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n2735.pdf


Haskell has been quietly doing this since the '90s, though. I guess being a research-first language, it isn't held back by trying to have mass appeal (e.g. being C-like for familiarity).


Haskell is held back by pushing category theory into their tutorials. Want to do IO? Great, first learn about monads.


That's not true and never has been. Want to do I/O?

    main = do
        putStrLn "Who are you?"
        name <- getLine
        putStrLn ("Hello, " ++ name)
There. Do you really need to know something about monads to understand that example? No.

Do you need to know how do syntax and the binding arrow <- work? Sure. But that has nothing to do with category theory. That's just syntax.


To be fair, once you get a compiler error you’ll at least need to know:

a. how do notation desugars to plain Haskell

b. what the bind (>>=) and return functions do for IO

so you can figure out:

c. why your types are not lining up

To understand Haskell in general, you need to realise that do, bind, and return are generic and can be used not just for IO but also for Maybe, List, etc.
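For a concrete taste of that genericity outside Haskell: bind for the list monad behaves like `flatMap` on arrays, so a list do-block can be sketched in TypeScript (an analogy, not a claim that TypeScript has monads):

```typescript
// The Haskell do-block
//   do { x <- [1,2,3]; y <- [10,20]; return (x + y) }
// desugars to  [1,2,3] >>= \x -> [10,20] >>= \y -> return (x + y).
// For lists, (>>=) is flatMap and return wraps a value in a singleton list:
const result = [1, 2, 3].flatMap(x => [10, 20].flatMap(y => [x + y]));

console.log(result); // [11, 21, 12, 22, 13, 23]
```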

Basically you need to know most practical things about monads!

I had a bad time writing Haskell do notation until I understood monads. I used to write imperative code, trying =, trying <-, always undoing back to a known working state, etc., to try to magic the code into compiling.


That's a good point. It's been long enough since I learned this stuff that I mentally translate "Monad m => m a" into, for example, "IO String" in type errors, to the point of not even noticing how cryptic the generic types can be!


Cunningham's law in action, everyone!


I’ve not seen much category theory in tutorials. I have seen it in some conference talks but they are aimed at people who want that, and you don’t need to watch those.

I’m doing a take 2 now of getting into Haskell again but ignoring advanced language features (unless forced on me by a library) and ignoring category theory. The goal this time is using Haskell to just build stuff.


C# has had type inference for a very long time (`var` arrived with C# 3.0 in 2007).




