Monday, December 26, 2011

Showing Thrists Revisited

More than three years ago I lamented that it's impossible to define a show function on thrists in general, even when the element type is a member of the Show class.

In the meantime I have succeeded in declaring certain thrist parametrizations and convincing GHC to accept Show instances for them. The Appli thrist is one such example.

With the upcoming GHC 7.4 things may improve considerably, as it will bring constraint kinds. Remember, thrists are (currently) parameterized like this: (* → * → *) → * → * → *, i.e. on types. With the new kind-variable feature we can probably generalize to user-defined kinds, so that the parameters match Ωmega's: (a → a → *) → a → a → *. The result is still a type, but we can choose our indices from a vastly bigger domain. Enter constraint kinds! These may be part of our user-defined kinds, so we can form stuff like (Show, *) and supply it for the parameter a. With some luck, deriving Show can then succeed in general, instead of being attached to particular parametrizations.
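To make the problem concrete, here is a minimal thrist together with the kind of workaround I have in mind, as it looks before constraint kinds (a fresh sketch of mine, not the actual Appli code; Show2 and Step are illustrative names):

```haskell
{-# LANGUAGE GADTs, KindSignatures #-}

-- a minimal thrist: a type-threaded list whose element indices chain up
data Thrist :: (* -> * -> *) -> * -> * -> * where
  Nil  :: Thrist p a a
  Cons :: p a b -> Thrist p b c -> Thrist p a c

-- a plain Show instance would need "Show (p a b)" at every hidden
-- intermediate index b, which an ordinary context cannot quantify over;
-- one workaround is a class demanding show-ability for all index pairs
class Show2 p where
  show2 :: p a b -> String

instance Show2 p => Show (Thrist p a b) where
  show Nil         = "Nil"
  show (Cons x xs) = show2 x ++ " : " ++ show xs

-- a toy element type to try it out
data Step a b = Step String
instance Show2 Step where
  show2 (Step s) = s
```

The Show2 class is exactly the kind of "particular parametrization" crutch that a (Show, *) index domain would make unnecessary.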

I regrettably still haven't gotten around to building a GHC v7.4.1 candidate, so I cannot verify the above, but my gut feeling is that this'll work out...

Tuesday, November 22, 2011

Pondering about the Foundations

As I am thinking about how a possible Ωmega 2.0 could be implemented, I am trying to come up with some interesting concepts w.r.t. parsing and type checking. Since not a single line has been written yet, pondering is easy and cheap.

Some key elements are already taking shape in my mind. These are:
  • parsed entities carry their start and end coordinates in their types
  • there is no difference between patterns and expressions
  • each expression carries a typing thrist.
I elaborated on the second item in an earlier post. The third must wait a bit until I have something coherent (but the curious reader might visit my sketch of ideas).

I'd like to talk about item one here. The essence is that every grammar element obtains a type that describes where the element is located. For example, a function declaration may occupy the text portion between start and end, where these are type-level triples (file, line, column). For parsed input to be considered consistent, each element's start coordinates must be aligned with the prior element's end coordinates, i.e. we have another thrist here.
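In GHC terms the adjacency condition can be sketched roughly like this (my own toy rendering, using promoted (line, column) pairs and omitting the file coordinate; Exp, Lit and Plus are illustrative names):

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}
import GHC.TypeLits (Nat)

-- an expression indexed by its start and end coordinates;
-- Plus only composes when the end of the left operand coincides
-- with the start of the right one (the thrist condition)
data Exp :: (Nat, Nat) -> (Nat, Nat) -> * where
  Lit  :: Int -> Exp s e
  Plus :: Exp s m -> Exp m e -> Exp s e

-- a toy semantics, just to have something runnable
eval :: Exp s e -> Int
eval (Lit n)    = n
eval (Plus a b) = eval a + eval b
```

For example, `Plus (Lit 1 :: Exp '(1,1) '(1,2)) (Lit 2 :: Exp '(1,2) '(1,3))` is accepted, while mismatched middle coordinates are rejected by the type checker.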

Things get interesting when we model naming and scope. Name definition and referral must somehow match up in their types; after the match is done, the semantics can be read off. To give an example:
Ref definer :: Exp (`foo, 42, 7) (`foo, 42, 11)
  where definer :: Exp (`foo, 41, 1) (`foo, 45, 37) = ...
Both the reference and the definition live in the same file "foo.prg". The former is on line 42 from column 7 to 11 (exclusive). The definition it refers to is in fact enclosing it (a recursive call, maybe?), extending from line 41 to 45. With this representation the name of the identifier becomes secondary; all that counts is the evidence in definer, which contributes to its type! We have created a globally nameless representation.

We can expand this world view to also include the level of the grammatical entity (value, type, kind, etc.) as a fourth coordinate. So a reference to a ›25‹ somewhere in the text at floor 0 would point to an integer literal, the same place at floor 1 to the type Int and at floor 2 it would be the kind *.

My hope is that all the proof obligations arising in such a scheme can be cleverly extracted from a parsec-like engine.

Wednesday, October 5, 2011

Macs and me

I am profoundly saddened since I woke up at 4:00 this morning and saw the news headline about the passing of Steve Jobs. I have seen this coming for a long time, as my father died in a very similar way back in 1993, at only 52. Looking at Jobs's thin appearance over the last months, or even years, I was constantly reminded of him. Basically the same story of suffering, loss of weight, liver transplant (in vain), death. RIP, Dad. RIP, Steve.

I am writing this on a rusty vintage 2000 PowerBook G4 Titanium that I bought on eBay last year because the video of my own 2001 TiBook went black. Completely inadequate by today's web standards, it serves me well for news reading, terminal logins, etc. My son Pedro got his MacBook Pro 15'' delivered just today. An awesome piece of technology.

My father bought the first Mac in 1986, just after opening his practice as a neurologist. This was two years after cutting all ties in Hungary and fleeing to Germany in a pretty bold move. It must have been a moment of total self-overestimation when I promised my dad: "if you buy that Mac Plus, I'll write you the best software for it for your doctor's office". A crazy time began. By day the Mac was used to keep patients' data in the DTP program "RagTime"; at 5pm I hauled the Mac home (in a big black bag) and worked on the program, sometimes deep into the night. I used Turbo Pascal (and later MPW) after figuring out that the Lisp environment I preferred simply did not cut it due to insufficient support of the Toolbox. In the morning my father carried the Mac back and powered it up.

Less than a year later the program was ready for productive work. A Mac SE joined the party and we had a networked doctor's application with a really neat windowed user interface that would put even today's programs to shame in this regard.

There was even a time when we fancied marketing this product, but my university duties and the early death of my father simply negated all plans to this end.

When I had my diploma in my hands I picked up the phone and called the guy who had sold us the Mac Plus and a copy of "Inside Macintosh" back in '86. In the meantime he had founded a pretty successful company around a networked admin solution called 'netOctopus', which was his baby. We occasionally met at Apple developer events and I knew that he was a pretty damn good coder. He hired me, and I was earning money by programming Macs!

So yes, I love Macs and there is no reason that this will change in the foreseeable future.

I kept telling myself that should Jobs die one day, I'd put that Mac Plus (now in my basement and still functional) up for sale on eBay. My thought today: "screw it – too many fond memories attached".

Friday, September 9, 2011

GHC 7.0.4 on CentOS

This is mainly a post for myself so that I can look back any time how I did it.

Basically I fell into every possible hole while wandering around in the dark, but I managed to climb out of each one :-)
  • after downloading, I configured the binary installation for x86-64. configure starts ghc-pwd, which is built against the wrong glibc version. Bummer.
  • I found out that I had to get the source install. Downloaded and configured; it died in the middle because base-4.2 was not available. Bummer.
  • so I had to install ghc-6.12.3 first. Fortunately this went smoothly (binary install). With this new friend I reconfigured v7.0.4 and started building. When linking the stage1 compiler I got an error from binutils that some relocation went awry. Bummer.
  • the internets are full of hints that the system compiler (gcc-4.1) uses the bad linker. To my luck I found a new gcc-4.4 in an amicable colleague's home directory, so I reconfigured thus:
    ./configure --prefix=/home/gabor --with-gcc=/home/msichel/local_x86/bin/gcc
  • but configure told me that the shiny new gcc cannot build executables. Bummer.
  • it turned out (I had to write a hello-world to figure that out) that some gcc-internal shared libraries were not found. Luckily they came into scope with
    LD_LIBRARY_PATH=/home/msichel/local_x86/lib:/opt/lsf/6.2/linux2.6-glibc2.3-x86_64/lib
  • at this point I could smell victory, but it was still out of reach. A blog post suggested copying mk/build.mk.sample to mk/build.mk and editing it in two ways:
    • uncomment BuildFlavour = quick
    • and change to GhcLibWays = v p dyn, i.e. append "p dyn"
  • I did both, reconfigured and typed make install.
It went through without a hitch :-)
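For next time, here is the whole working recipe in one place (a reconstruction of the steps above; the gcc and library paths are specific to that machine):

```shell
# reconstruction of the successful GHC 7.0.4 build on CentOS;
# prerequisite: a binary install of ghc-6.12.3 already on the PATH

# make the gcc-internal shared libraries visible
export LD_LIBRARY_PATH=/home/msichel/local_x86/lib:/opt/lsf/6.2/linux2.6-glibc2.3-x86_64/lib

# configure the source tree with the newer gcc-4.4
./configure --prefix=/home/gabor --with-gcc=/home/msichel/local_x86/bin/gcc

# pick the build flavour
cp mk/build.mk.sample mk/build.mk
#   in mk/build.mk: uncomment  BuildFlavour = quick
#   and set         GhcLibWays = v p dyn

make && make install
```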

Sunday, August 21, 2011

Poor Man's Haskell Kinds

After finalizing some of my idea prototypes in Ωmega, I intend to port a part of the code to Haskell, to enjoy the speed advantages of a compiled language.

Unfortunately Haskell does not support user-defined kinds, so I was wondering how to at least simulate them, in order to gain some confidence that my type-level constructs do not drift into no-no-land.

This literate Haskell file demonstrates how I put instance constraints to work, and thereby define the inference rules for well-kinded phantom type indexes. I am sure somebody has already done this; if so, I would like to hear about it. Anyway, this almost allows me to introduce Ωmega's checked singleton types in Haskell.
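In a nutshell, the trick goes like this (a reconstruction from memory, not the literate file itself; all names are illustrative):

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- empty declarations play the role of the kind's inhabitants
data Z
data S n

-- the class Nat plays the role of a user-defined kind; its instance
-- declarations are the inference rules for well-kinded indexes
class Nat n
instance Nat Z
instance Nat n => Nat (S n)

-- a phantom index that the constraints keep well-kinded
newtype Vec a n = Vec [a]

nil :: Vec a Z
nil = Vec []

-- the Nat constraint rejects ill-kinded indexes such as "Vec a Bool"
cons :: Nat n => a -> Vec a n -> Vec a (S n)
cons x (Vec xs) = Vec (x : xs)
```

Anything of the shape Vec a (S (S ... Z)) is accepted, while a stray index like Vec a Char fails the Nat constraint at the use site.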

Sadly GHC does not yet allow instance definitions for type families, but it would be cool if somebody could provide me with a workaround.

Thursday, February 10, 2011

My Thoughts about Inertia

Big caveat: I am very much a layman when it comes to physics, and this is the consequence of having heard elementary physics lectures more than 15 years ago.

But I like the occasional book on the subject (The Elegant Universe comes to mind) and am fascinated by the theoretical possibility of a grand unified theory of Nature's forces.

There is a small niggling idea in my head that wants to get out from time to time, and this post is the evidence that it finally made it into the wild. (I will probably appear to be a bloody idiot because of this...)

Being Hungarian-born has made me somewhat receptive to advances in physics made by the pretty smart folks coming from this small country, and already as a pupil I was taught that the "equivalence principle" (gravitational mass and inertial mass coinciding to an amazing degree) was demonstrated by Eötvös. This cannot be a simple coincidence but must be a fundamental thing.

A completely unexplained fundamental thing, as it appears to me. A quick googling for "inertia quantum gravity" brings up some outright lunatics and a rather exotic theory of zero-point field fluctuations causing inertia:

http://www.calphysics.org/articles/PRA94.pdf
http://www.calphysics.org/articles/gravity_arxiv.pdf

But electromagnetism causing the same mass as gravity (i.e. linking two of the fundamental forces with unbelievable precision) appears very far-fetched to me.

Wikipedia seems to be no help when inquiring about the source of inertia. Mach's principle, referenced therein (namely that the existence of the rest of the universe leads to the phenomenon of inertia), does not sound right; after all, if all the other stars (galaxies etc.) were removed except the Sun, would you expect the inertia of your body to suddenly fall to almost zero?

I cannot believe that modern theories of gravity (such as quantum gravity approaches) do not try to explain the equivalence of gravitational and inertial mass. My intuition says this should be the first step!

Now I'll begin a steep descent into the land of speculation and irreverently mix stuff told by others with vague ideas and metaphors of my own. I won't even make an effort to cite or link, but at some point I'll tell you what my main idea is. Be warned: at that point probably every self-respecting physicist (that is, a person who has successfully solved a differential equation about the movement of a body under an external force) will turn away in disgust, but okay, let it be...

There is this idea that gravity is so weak in comparison to electromagnetism because the circular (closed) strings corresponding to the graviton do not permanently bind to our three-dimensional space (or four-dimensional spacetime). They are just passing through, and there is little time for them to establish the force between bodies. So the gravitational force two glasses of red wine exert on each other is very small. But when they are accelerated (say, by clinking them) perceptible forces appear. So I conjecture that both forces are caused by the same mass, and by the same gravitons, but they differ in magnitude because the gravitons stay around longer in the second case, so as messenger particles they establish a force many orders of magnitude stronger.

How can they stay around longer and become relevant? Perhaps by traveling orthogonally to the gravitational gravitons. Okay: inertial gravitons travel (more or less) orthogonally to our space, that is, along the time axis, and thus follow the trajectories of the particles the matter is built of.

I think it was Feynman who mentioned that an anti-particle traveling 'backwards' in time can be equated with its particle traveling forward in time. Now, what is the anti-graviton? I believe I read somewhere that it is the same as the graviton, just as a photon does not differ from an anti-photon. So time inversion does not change any property of the graviton. Good. We have gravitons which make a short visit to our spacetime and vanish just as fast as they appeared, and there are those which accompany mass-like particles (even at rest) along their trajectory in time, possibly bouncing back and forth between earlier and later. This constitutes (my IDEA!) a force between a body now and its earlier appearance in spacetime. Of course this is the same as a force between now and the future of the body. Actually, there is a force between me and (me one second later). Or ten seconds, or even a year!

Two things become clear now. First, we have to perform an integration (over all time destinations) to get the resulting force. And second, the force toward the past will bear a negative sign, so for stationary bodies all inertial forces cancel out. I think this can be asserted for uniform movement too (i.e. a constant velocity vector), but I have not spent any thought on this yet. Anyway, this is Newton's first law.

Time to get to the second one. Here I want to make some further remarks. The inverse-square(-like) law should hold for inertial gravitons too, so that a longer time-span enters into the integral with a smaller effect, much as the gravitational force between objects a light-second apart also decreases substantially. Second, causality is a bit smeared at the very short intervals where the forces are really relevant (Planck time?), so the reality a bit before pulls hard backwards and the reality a bit later pulls hard forward. Both look pretty much the same, differing only in the direction of the force. I do not know whether this is essential, though.

Back to Newton's second law. Imagine a stroboscope lighting a scene where a body moves, accelerated by a constant force along its direction of movement. When you take a photo (with a reasonable exposure time) you'll obtain a picture of several bodies, the earlier ones nearer together and the later ones successively more distant. Now let's declare the middle one as being now. Then the prior one is less distant than the next one, so the force between prior and now is bigger than the force between now and next. These no longer cancel out. This is a consequence of the inverse-square law. Of course the stroboscope should flash with Planck frequency etc. (so that the spacetime distance is small enough), but the principle is clear: we get a resultant force that points backwards, namely the difference between the (bigger) half-integral towards the past and the (smaller) half-integral towards the future.
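For what it's worth, the cancellation argument can be condensed into a toy formula (my own back-of-the-envelope formalization of the hand-waving above, nothing derived: tau_min and tau_max stand in for the Planck-scale and long-range cutoffs, n for the unknown inverse power):

```latex
F(t) \;\propto\; \int_{\tau_{\min}}^{\tau_{\max}}
      \frac{x(t+\tau) \,-\, 2\,x(t) \,+\, x(t-\tau)}{\tau^{\,n}}\, d\tau
```

For rest and for uniform motion x(t) = x0 + v·t the numerator vanishes identically (the first law); for constant acceleration it equals a·τ², so the net force comes out proportional to a, which at least has the shape of the second law.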

All along I was using the wrong word 'force' for something space-time-like (a vector with 4 components). What we perceive or measure as accelerating force is the projection of that 4-component vector onto our 3-dimensional space. So I conjecture that F = ma is the projection of the resultant 4-force explained in the last paragraph onto our 3 dimensions.

If this little idea holds any water, then it should be possible to calculate the inverse law for the 4-force. I'd bet it is inverse-cubic. If this succeeds, then the next test would be to figure out what the 4-force looks like when a relativistic (say, half-lightspeed) particle is accelerated by a 3-force. Clearly the 4-velocity rotates out of time, but does the mass of the particle increase? I do not dare to think about whether we can ever get to the point where this idea is tested in an even more comprehensive setting.

But it was surely a lovely night when I was drinking that red wine with a good friend from Brazil two days ago, who dared to ask a quantum-mechanical question and received a long story as an answer, unknowingly liberating this small niggling idea from the confines of my head!

Sunday, January 9, 2011

Quantifiers: Dark Matter

Modern cosmology routinely deals with dark matter, a source of gravity necessary to explain certain things. In the last days I have discovered a kind of dark matter in Ωmega too, and here is the related story.

As we know, type constructors arise when a data definition is equipped with parameters, like below:
data Tree a = Tip a | Fork (Tree a) (Tree a)
a is a type variable and gets instantiated (to a type) when we want the tree to contain actual information.

Actually I could have written the second recursive occurrence of Tree a in the Fork branch as Tree Char, and the type checker would have accepted it. Why? Because at each instantiation site the type variable a is created fresh, and can be bound to a new expression.
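Indeed, a throwaway GHC example (mine, not from the post) shows that such a definition is accepted:

```haskell
-- the variant mentioned above: the second branch is fixed at Tree Char;
-- GHC accepts this, since each occurrence of Tree instantiates 'a' afresh
data Tree a = Tip a | Fork (Tree a) (Tree Char)

-- counting the tips works across the differently-instantiated branches
size :: Tree a -> Int
size (Tip _)    = 1
size (Fork l r) = size l + size r
```

For instance, Fork (Tip True) (Tip 'x') is a perfectly fine Tree Bool.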

But what is the kind of a? In Haskell it can only be *, which classifies all types, as the other possible kinds (* → *, etc.) do not classify types. But this kinding relation is not spelled out directly, so let's choose a more explicit definition:
data Tree :: * → * where ...
When talking about Tree Integer, it becomes pretty clear that Integer's kind must be * and that the resulting Tree Integer is itself kinded by *.
But there is a fine distinction between explicit and implicit kinding in Ωmega: in the latter case a is kinded by a unification variable instead of by *. The reason is that Ωmega allows us to define custom kinds, which can classify other type-like things. These are usually used for indexing type constructors.

The unification variables that are present but hidden from the user only become manifest when we have multiple levels of data definitions, with the higher levels indexing the lower ones, and some constructor function tries to bind the unification variable to different kinds. Of course that won't work, and all you'll see is a very cryptic error message. The smallest example I have encountered is issue 70, which I will present here.

data Pat :: *3 where
  Q :: Pat ~> Pat

data Pat' :: Pat ~> *2 where
  Q' :: Pat' n ~> Pat' (Q n)

data Pat'' :: Pat' p ~> *1 where
  Q'' :: Pat'' n ~> Pat'' (Q' n)

data Pat''' :: Pat'' p ~> *0 where
  Q''' :: Pat''' n → Pat''' (Q'' n)
Look at the last line; it induces these two kind equations:
n = Pat'' p1
Q'' n = Pat'' p2
Each equality at some level induces an equality one level up. But p1 and p2 are kinded by the same unification variable, k. Let's note the two equations with k in the middle:
m = k = Q' m
where m is the kind of n.
You can see now that m is forced to be equal to Q' m, which clearly has no solution (the occurs check fails).
The solution is to spell out the kinding quantifiers explicitly, with type variables. Since these get instantiated to fresh copies on the fly, the above problem does not occur.

So we arrive at (only showing the last definition for brevity's sake)
data Pat''' :: forall (o::Pat) (p::Pat' o) . Pat'' p ~> *0 where
  Q''' :: Pat''' n → Pat''' (Q'' n)
Black magic? Only when you let the kinds live in the dark!