[EDIT: Here are a few words on each]
There are several ways of extending HM type inference. My answer is based on many, more or less successful, attempts at implementing some of them. The first direction I stumbled upon is parametric polymorphism. Type systems extending HM in this direction tend toward System F and therefore require type annotations. Two notable extensions I came across are:
HMF allows type inference for all System F types, which means universal quantifiers can appear "in the middle" of a type; they are not implicitly placed at the outermost scope, as HM's polymorphic types are. The paper clearly states that no simple rule exists for how many type annotations are needed, or where. Also, since the types are those of System F, terms usually don't have a principal type.
MLF is not only an extension of HM; it is also an extension of System F that regains the principal-type property of HM by introducing a kind of bounded quantification over types. The authors made a comparison: MLF is strictly more expressive than HMF, and annotations are only required for parameters that are used polymorphically.
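To give a feel for what "parameters used polymorphically" means, here is a sketch in GHC Haskell using `RankNTypes` (GHC's system is related to, but distinct from, HMF and MLF; the function name `both` is made up):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The rank-2 annotation below is mandatory: the parameter `f` is used
-- polymorphically (at Int and at Bool), which plain HM cannot infer.
both :: (forall a. a -> a) -> (Int, Bool)
both f = (f 0, f True)

-- With the signature removed, GHC rejects the definition: it would
-- try to unify Int and Bool as the type of `f`'s argument.
```

Calling `both id` is fine; the annotation on the parameter is exactly the kind these systems require.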
Another way of extending HM is to vary the constraint domain.
HM(X) is Hindley-Milner parameterised over a constraint domain X. In this approach, the HM algorithm generates constraints that are passed to a solver for the domain X. For plain HM, the solver is the unification procedure and the domain is the set of terms built from the types and the type variables.
Another example for X could be constraints expressed in the language of Presburger arithmetic (in which case type inference/checking is decidable) or in the language of Peano arithmetic (no longer decidable). X varies along a spectrum of theories, each with its own requirements on the number and location of type annotations, ranging from none at all to all of them.
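To make the generate/solve split concrete, here is a toy sketch for a monomorphic fragment (no `let`-generalisation), where X is the free term algebra, so the "domain solver" is plain syntactic unification. All names and representations below are invented for the sketch:

```haskell
import qualified Data.Map as M

data Ty = TVar Int | TInt | TArr Ty Ty deriving (Eq, Show)
data Expr = Var String | IntLit Int | Lam String Expr | App Expr Expr

type Constraint = (Ty, Ty)      -- an equality  t1 ~ t2
type Subst = M.Map Int Ty

-- Constraint generation: returns the type of the expression, the
-- constraints it must satisfy, and the next fresh variable index.
gen :: M.Map String Ty -> Int -> Expr -> (Ty, [Constraint], Int)
gen env n (Var x)    = (env M.! x, [], n)
gen _   n (IntLit _) = (TInt, [], n)
gen env n (Lam x e)  =
  let tv          = TVar n
      (t, cs, n') = gen (M.insert x tv env) (n + 1) e
  in (TArr tv t, cs, n')
gen env n (App f a)  =
  let (tf, cs1, n1) = gen env n f
      (ta, cs2, n2) = gen env n1 a
      tr            = TVar n2
  in (tr, (tf, TArr ta tr) : cs1 ++ cs2, n2 + 1)

-- The solver for plain HM: first-order unification.
solve :: Subst -> [Constraint] -> Maybe Subst
solve s [] = Just s
solve s ((a, b) : cs) = case (apply s a, apply s b) of
  (TVar i, TVar j) | i == j      -> solve s cs
  (TVar i, t) | not (occurs i t) -> solve (M.insert i t s) cs
  (t, TVar i) | not (occurs i t) -> solve (M.insert i t s) cs
  (TInt, TInt)                   -> solve s cs
  (TArr a1 r1, TArr a2 r2)       -> solve s ((a1, a2) : (r1, r2) : cs)
  _                              -> Nothing  -- clash or occurs failure

-- Apply a substitution, following chains of bound variables.
apply :: Subst -> Ty -> Ty
apply s (TVar i)   = maybe (TVar i) (apply s) (M.lookup i s)
apply s (TArr a b) = TArr (apply s a) (apply s b)
apply _ TInt       = TInt

occurs :: Int -> Ty -> Bool
occurs i (TVar j)   = i == j
occurs i (TArr a b) = occurs i a || occurs i b
occurs _ TInt       = False

-- Infer the type of a closed expression.
infer :: Expr -> Maybe Ty
infer e =
  let (t, cs, _) = gen M.empty 0 e
  in fmap (`apply` t) (solve M.empty cs)
```

The point is that `gen` knows nothing about unification: swapping `solve` for a Presburger solver (and enriching `Constraint`) changes the theory X without touching the traversal.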
Haskell's type classes are also a kind of extension of the constraint domain: they add type predicates of the form `MyClass MyType`, meaning that there exists an instance of `MyClass` for the type `MyType`.
Type classes preserve type inference because they are an (almost) orthogonal concept: they implement ad-hoc polymorphism.
As an example, take a symbol `val :: MyClass a => a`, for which you can have instances `MyClass B`, etc. When you refer to that symbol in your code, it is precisely because type inference has already been performed that the compiler can infer which instance of the class to use. This means that the type of `val` depends on the context in which it is used. That is also why evaluating a bare `val` expression leads to an ambiguous-type error: the compiler cannot infer any type from the context.
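A minimal self-contained version of this example (the class and instance definitions are made up):

```haskell
class MyClass a where
  val :: a

instance MyClass Int where
  val = 42

instance MyClass Bool where
  val = True

-- The type of `val` depends on the context in which it is used:
useInt :: Int
useInt = val + 1        -- the MyClass Int instance is selected, so 43

useBool :: Bool
useBool = not val       -- the MyClass Bool instance is selected, so False

-- By contrast, evaluating `val` on its own is rejected with an
-- ambiguous-type error: nothing in the context determines `a`.
```

Here `useInt` and `useBool` pick different instances purely from the type expected at the use site.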
For more advanced type systems like GADTs, type families, dependent types, System (F)ω, etc., types are no longer just "types": they become complex computational objects. For example, two types that do not look the same are not necessarily different, so type equality becomes far from trivial.
To give you an example of the actual complexity, consider the dependent type of length-indexed lists, `NList a n`, where `a` is the type of the objects in the list and `n` is its length. The append function would have type `append :: NList a n -> NList a m -> NList a (n + m)` and the zip function `zip :: NList a n -> NList b n -> NList (a, b) n`.
Now imagine the lambda `\a: NList t n, b: NList t m -> zip (append a b) (append b a)`. Here the first argument of `zip` has type `NList t (n + m)` and the second `NList t (m + n)`.
These are almost the same, but unless the type checker knows that `+` commutes on the natural numbers, it must reject the function, because `(n + m)` is not literally `(m + n)`. At that point it is no longer about type inference or type checking; it is about theorem proving.
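Here is how this example can be sketched in GHC Haskell with GADTs and type-level naturals (an approximation of dependent types; `Plus` stands in for the "+" above, and all names are invented):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

data Nat = Z | S Nat

-- Type-level addition on the promoted naturals.
type family Plus (n :: Nat) (m :: Nat) :: Nat where
  Plus 'Z     m = m
  Plus ('S n) m = 'S (Plus n m)

data NList a (n :: Nat) where
  Nil  :: NList a 'Z
  Cons :: a -> NList a n -> NList a ('S n)

append :: NList a n -> NList a m -> NList a (Plus n m)
append Nil         ys = ys
append (Cons x xs) ys = Cons x (append xs ys)

nzip :: NList a n -> NList b n -> NList (a, b) n
nzip Nil         Nil         = Nil
nzip (Cons x xs) (Cons y ys) = Cons (x, y) (nzip xs ys)

toList :: NList a n -> [a]
toList Nil         = []
toList (Cons x xs) = x : toList xs

-- The function from the text is rejected: GHC has no proof that
-- Plus n m ~ Plus m n, even though it holds for all naturals.
-- bad a b = nzip (append a b) (append b a)
```

Uncommenting `bad` makes GHC complain that it cannot match `Plus n m` with `Plus m n`: exactly the commutativity obligation, which a human sees instantly but the checker would need a proof of.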
Liquid types seem to perform some dependent type inference. But as I understand it, they are not really dependent types: they are more like the usual HM types, on which an additional inference pass computes static bounds.
I hope this helps.