A while ago, I tried to make sense of differential 1-forms by thinking of how they're used in terms of a type system. I got approximately nowhere fast (you can see the post!). I've recently been reading Penrose's Road to Reality, which describes them in near-layman's terms, and this seemed a good point to try again...
He defines a vector field as a function which takes another function (from points on the manifold to real values), and at each point calculates some weighted sum of partial derivatives, effectively taking the derivative in a particular direction. That is, a vector field E acts on a scalar function P defined over the manifold: E(P). E is of type (Point -> Real) -> (Point -> Real).
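To convince myself this typing actually works, here's a quick Python sketch of a vector field as a higher-order function, using finite differences on R^2 as a stand-in for a chart on the manifold. All the names (Point, Scalar, vector_field) are my own invention, not anything from the book:

```python
from typing import Callable

Point = tuple[float, float]
Scalar = Callable[[Point], float]   # Point -> Real

def vector_field(components: Callable[[Point], tuple[float, float]],
                 h: float = 1e-6) -> Callable[[Scalar], Scalar]:
    """Turn per-point components (a, b) into the differential operator
    E(P) = a * dP/dx + b * dP/dy, approximated by central differences."""
    def E(P: Scalar) -> Scalar:
        def EP(p: Point) -> float:
            x, y = p
            a, b = components(p)
            dPdx = (P((x + h, y)) - P((x - h, y))) / (2 * h)
            dPdy = (P((x, y + h)) - P((x, y - h))) / (2 * h)
            return a * dPdx + b * dPdy
        return EP
    return E
```

Note that E really does have type (Point -> Real) -> (Point -> Real): feed it a scalar function and you get another scalar function back.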
He then defines dP = dx dP/dx + dy dP/dy + ... (you'll have to insert your own curly 'd's as appropriate), where the dx and dy and so on are differential 1-forms. Thus, ahem, dP . E = E(P), for some variant of a dot product which is never really explained.
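Numerically, at least, the claimed identity checks out: collect the partials of P into a tuple standing in for dP, take the componentwise "dot product" with the field's components, and you get the same number as applying E to P. Another finite-difference sketch with invented names:

```python
from typing import Callable

Point = tuple[float, float]
H = 1e-6

def partials(P: Callable[[Point], float], p: Point) -> tuple[float, float]:
    """The components (dP/dx, dP/dy) of dP at p, by central differences."""
    x, y = p
    return ((P((x + H, y)) - P((x - H, y))) / (2 * H),
            (P((x, y + H)) - P((x, y - H))) / (2 * H))

def pair(dP: tuple[float, float], e: tuple[float, float]) -> float:
    """The mystery 'dot product': sum of componentwise products."""
    return dP[0] * e[0] + dP[1] * e[1]

P = lambda p: p[0] ** 2 + p[0] * p[1]   # P(x, y) = x^2 + xy
e = (1.0, 2.0)                          # the field's components at this point
p = (1.0, 1.0)
# dP at (1, 1) is (2x + y, x) = (3, 1); pairing with (1, 2) gives 3 + 2 = 5,
# which is exactly what E(P) at (1, 1) would give for this field.
```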
So, one minute E is a function, the next it's something you can take a dot product with. This is probably in some vector space of functions, but the whole thing seems pointlessly messy to me.
Instead, I decided to build my own mental model, and assume that whatever the books have is just waffle disguising it.
Taking the derivative of a function is basically identifying the linear transform which it's locally like. In Euclidean space, taking the derivative of an R^n -> R^m function at a point gives you an R^n -> R^m linear transform. However, for a manifold, the space you view the derivative in (a vector space) may be rather different from the space you started in (something curvy). This space for derivatives is the tangent space. The tangent space for the reals is the reals, so if you have a manifold M with a tangent space T, taking the derivative takes a function M -> R to a function T -> R.
Thanks to duals, this derivative can also be represented as a member of T. We can then view the dx, dy etc. as basis elements of the tangent space, and dP as an element of the tangent space representing the derivative.
The tangent space allows you to tack directions onto each point. In other words, a vector field can be viewed as supplying an element of T for each point in M. This gives us a way of representing E. Happily enough, dP . E gives us the thing we were after.
Overall, I think I prefer defining dP as a function of type T -> R. That way, the covariant scaling effects are made clearer, and you can probably get away without a dot product (haven't thought hard about that second part). If you change the scaling on your parameterisation of the manifold so that your vector field entries increase in measured length, the derivatives (as functions from the tangent space to the reals) decrease proportionally, so that the final number you get stays the same.
I'm sure I've swept a lot of clever and subtle issues under the carpet here. However, I feel the textbooks' approach does too. Until I can get a handle on an alternative well-typed interpretation of these things, this will do nicely for me.
Update: I underestimated Penrose. In a later chapter he clarifies his handwavey notation. I have been using the dual of the normal convention. The vector field assigns each point an entry in its tangent space. Interpreting this vector space as the space of functions taking derivatives in all directions gives us the 'vector field as differential operator' view of the world. Treating it as a plain vector space, the dual space (functions from the tangent space to the reals) is the space of differential 1-forms, aka covectors. The 'dot product' we see is really just applying one to the other, and we end up with the contravariance between the two as needed.
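In the typed view I was after, the corrected picture comes out very cleanly: a vector field assigns each point a tangent vector, a 1-form assigns each point a linear map T -> R, and the 'dot product' is literally function application. A tiny sketch (again, all names are mine):

```python
from typing import Callable

Tangent = tuple[float, float]            # element of T at some point
Covector = Callable[[Tangent], float]    # linear map T -> R, i.e. a 1-form's value

def dP_at(grad: tuple[float, float]) -> Covector:
    """Build the covector v -> (dP/dx) * v_x + (dP/dy) * v_y from the
    partials of P at a point."""
    gx, gy = grad
    return lambda v: gx * v[0] + gy * v[1]

omega = dP_at((5.0, -1.0))    # pretend dP/dx = 5 and dP/dy = -1 here
result = omega((2.0, 3.0))    # the 'dot product' is just applying omega: 10 - 3 = 7
```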
Penrose's rather nice geometric intuition for this is that vectors work as you expect (their length is what you'd understand it to be), but covectors represent density. 'dx' can be thought of as the hyperplane of points perpendicular to the direction x, and represents a density of stuff in that direction. I'm still working through this chapter, but I expect this will fit quite nicely with integration...