Arc Forum
Packaging libraries
11 points by stefano 5663 days ago | 48 comments
I've added to Anarki a library for packaging libraries (file pack.arc). Basically, it lets you put all the files of your library in a directory that can be placed wherever you like and easily loaded. See the comments at the beginning of the file for a small example. I will add support for automatic dependency loading in the near future.

With this, loading a library should be as easy as typing

  > (use-pack 'http-get)


1 point by almkglor 5649 days ago | link

Hmm. Would you mind giving a more overviewish summary of the library-packaging system? You seem to have stuff like "defproject" in pack.arc; what are the relationships between projects, libraries, and packages in your scheme?

Currently the (using ...) scheme in Arc-F assumes a package is a file is a library (and if the library is too large to fit in a single file, it has various "support packages" related to the main library package: i.e. the lib.arc source contains '(using <lib/part1>v1), and there's a file lib/part1.arc which contains that part of the library). I'm interested in your take on multi-file libraries.

Also, my scheme assumes one context object per source file. A context object is equivalent to the *current-package* variable in CL, except that you can arbitrarily create such objects, and they work monadically. What is your expectation of how packaged libraries will work?

-----

1 point by stefano 5647 days ago | link

> if the library is too large to fit in a single file, to have various "support packages" which are related to the main library package

No more. For simplicity, everything (even a single file) is now packaged in a directory. A "library" (or "package") is a way to distribute some piece of code: everything is placed within a directory named libname.pack, together with the information needed to load its dependencies. You can load such a package and all its dependencies with a single command ('use, 'require or 'using).

A "project" is a way to manage the development of such a library, and is roughly a "make" for Arc. To define a project you use the macro 'defproject, specifying a list of dependencies (single files or libraries) and a list of files composing the project. When loading a project through 'proj-load, only modified files are loaded. Once you have loaded a project you can build a packaged library with 'deliver-library: a proper directory will be created and populated with the relevant files.

Projects and libraries are independent of the namespace system. I've modified 'using to load a library with 'use if nothing else can be done, on the assumption that if such a library exists, it also defines its own namespace (e.g. (using <http-get>v1) loads the library named 'http-get, hoping that that library defines a namespace <http-get> with an interface v1).
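
Something along these lines, for illustration (the clause names and layout below are only a sketch; the real 'defproject syntax is documented in pack.arc):

  ; hypothetical proj.arc -- clause names and layout are guesses,
  ; check the comments in pack.arc for the actual 'defproject syntax
  (defproject http-get
    (deps "lib/utils.arc" json)           ; dependencies: plain files or packaged libraries
    (files "http-get.arc" "headers.arc")) ; files composing the project

  ; (proj-load ...)       reloads only the files modified since the last load
  ; (deliver-library ...) builds the http-get.pack directory for distribution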

-----

1 point by almkglor 5647 days ago | link

In the Arc-F packages/namespaces, dependencies are specified by the (using ...) metaform. Each package is conceptually a point where a library could be; if a library needs several files, each file is a package and there is a unifying file which depends on the other packages and specifies the interface.
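
Sketched out (with the 'interface declarations elided), a two-file library might be laid out like this:

  ; lib/part1.arc -- a support package; declares the interface <lib/part1>v1
  (in-package lib/part1)
  (using <arc>v3)
  (def helper (x) (+ x 1))

  ; lib.arc -- the unifying file users load with (using <lib>v1); it pulls
  ; in the support package and declares the public interface <lib>v1
  (in-package lib)
  (using <arc>v3)
  (using <lib/part1>v1)
  (def lib-fn (x) (helper (helper x)))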

I suppose my view of namespace-as-package is from the point of view that namespaces are the be-all and end-all of organizing libraries though. Hmm. I'll try a look-see at your pack.arc, although I think it would be nice if you could provide a few simple examples.

-----

1 point by stefano 5645 days ago | link

> dependencies are specified by the (using ...) metaform

Mmmm... I've been thinking about this. There seems to be some overlap between your system and mine. I think pack.arc is better suited to extending a pure namespace system with a packaging/dependency system, whereas the Arc-F namespace system is meant to manage dependencies as well.

To see real usage of pack.arc you can look at my ftp library: http://github.com/stefano/ftp-client/tree/master It hasn't been updated to work with arc-f (yet), but it shows how to use pack.arc: a proj.arc file for development and an automatically generated directory ftp-client.pack intended to be downloaded and copied into the search path by the end user.

-----

1 point by almkglor 5645 days ago | link

Yes. In fact I kind of rushed Arc-F a bit because I suspected that your pack.arc would overlap with the packages system in Arc-F, so I wanted to see what we could work out in order to reduce the overlap in this case.

Hmm. Project management? The Arc-F namespace system doesn't handle thinking in terms of "projects". And how about versioning? The "version" that is used in the Arc-F namespaces is more about the interface version, not necessarily the version of the actual library.

I was also thinking that a particular library may potentially have multiple implementations that share the same interface. For example, there is a reference implementation for vectors in lib/vector.arc, and (using <vector>v1) will acquire that vector interface. However, a different implementation of Arc-F - for example, arc2c - might provide a different implementation of vectors - in arc2c's case, possibly a thin wrapper around a C array.

-----

1 point by stefano 5645 days ago | link

> how about versioning?

I will probably add it to pack.arc in the future. For the moment it wouldn't be very useful, since there are so few libraries and the ones that do exist are at an early stage of development. I'll put into pack.arc what I need now to help me develop and distribute libraries. Suggestions are always welcome, of course.

The main overlap between the two systems seems to be the fact that 'using tries to load a file before importing its interface. I don't think this will create any conflict with pack.arc. I should also change its name: the term "package" is currently used both for "collection of files" (pack.arc) and "namespace" (arc-f).

Your example about arc2c raises a problem that Arc, in its original conception, wanted to solve: multiple implementations. Small differences between implementations always end up hurting: for example, ftp-client works with Anarki but not with Arc2 or Arc-F. There are too many implementations of Arc right now. I have nothing against arc-f, snap, arc2c, rainbow,... I like their existence and what they've added (in particular arc-f), but a canonical implementation used by 99% of Arc developers should exist, and it should be "fast enough" (2x slower than Python is my personal limit).

-----

1 point by almkglor 5644 days ago | link

How about project-based development then? Basically to keep related files together.

Personally I prefer a variety of smaller libraries whose components would then be composed by other libraries (which would end up being small too, because the functionality exists in other libraries).

My main design goal in Arc-F is to make the use of libraries - and in particular, the use of different libraries from different people with different design philosophies - as smooth as possible. Many of the additions in Arc-F (the ones that aren't packages) are actually subtly biased towards that main goal.

> (2x slower than python is my personal limit).

Hehehe. Looks like I'll need to start doing some teh ultimate leet hackage in the function dispatching code... Or alternatively start considering how to write an interpreter from scratch (which is a subgoal of SNAP, too) ^^

-----

1 point by stefano 5644 days ago | link

> How about project-based development then?

I don't quite understand that. With pack.arc development is project-based. Maybe we have different opinions of what a "project" is. To me, it is a directory with a file proj.arc describing the structure of the project (a 'defproject declaration).

> write an interpreter from scratch

Really difficult, but really needed. The mzscheme dependency is quite big compared to how small a language Arc is. The main efficiency problem, as you said, is function dispatch, because we have to check whether the thing being called is a function, a list, etc. One thing I don't like very much about SNAP is the dependency on the boost libraries: it is a huge dependency. Is it really needed?

Another problem with an interpreter from scratch is the GC: it is very difficult and time-consuming to write an efficient, concurrent and stable GC. A good solution would be to use the Boehm-Weiser GC: it is easy to integrate into any interpreter (I don't know if it works with SNAP's process model, though) and it is a really good GC. Even the Mono project and gcj use it.

-----

2 points by almkglor 5643 days ago | link

> With pack.arc development is project-based.

Ah, right. Of course, that's why there's 'defproject, right?

> because we have to check if it is a function, a list, etc

As an idea: generally, writes to global variables are much rarer than reads; in fact, practically speaking, nearly every global variable is going to be a constant. We could move the cost of checking whether a call is a function, a fake arc-f function, or a data structure to the write of a global variable rather than the read.

Basically, calls where the expression in function position is a reference to a global variable are transformed to callsites which monitor that global. The callsite initially determines the type of the value in the global (or creates an error-throwing lambda if the global is still unbound) and determines the proper function to perform for that call (normal function call, or a list lookup, or a table lookup, etc). The callsite also registers its presence with the global.

If the global is written, the global also notifies all living callsites (we thus need weak references for this), which will then update themselves with the new value.

This is actually "for-free" in SNAP, because there's an overhead in reading globals (copying from the global memory-space to the process memory-space), and SNAP thus needs to monitor writes to globals so it can cache reads.

> One thing I don't like very much about SNAP is the dependency on the boost libraries: it is a huge dependency. Is it really needed?

The bits of boost I've used so far are mostly the really, really good smart pointers; while I've built toy smart pointer classes I'm not sure I'd want those toys in a serious project. Also, I intend to use boost for portable mutexes. Now if only boost had decent portable asynchronous I/O...

Alternatively we could wait a bit for C++0x, which will have decent smart pointers that I believe are based on boost.

> Boehm-Weiser GC: it is easy to integrate in any interpreter (I don't know if it works with SNAP's process' model, though)

Well, one advantage of the process-local model is that process-local memory allocations won't get any additional overhead when the interpreter is multithreaded; AFAIK any malloc() drop-in replacement either needs to be protected by locks in a multithreaded environment, or will do some sort of internal synchronization anyway. In effect we have one memory pool per process, allocating large amounts of memory from the system and splitting it up according to the needs of the process.

Since processes aren't supposed to refer to other processes' memory, the Boehm-Weiser GC won't have anything to actually trace across allocated memory areas anyway.

And I probably should start using tagged pointers instead of ordinary pointers ^^. They're even implementable as a C++ class wrapping a union.

In any case a copying algorithm already exists because we need to copy messages across processes anyway: minor changes are necessary to extend it to a copying GC.

-----

1 point by almkglor 5656 days ago | link

'int->str is buggy:

  arc> (int->str 1 10)
  "00000000001"
  arc> (int->str 10 10)
  "00000000010"
  arc> (int->str 11 10)
  "00000000011"
  arc> (int->str 100 10)
  "100"

-----

1 point by stefano 5656 days ago | link

It's a problem with the math. I use (quotient n 10) where I should use (+ 1 (coerce (log 10 n) 'int)), where 10 is the base of the logarithm. I'll also have to add 'log to ac.scm, because it's not there...

-----

2 points by almkglor 5655 days ago | link

Actually you could have used this arc.arc function:

  (def pad (val digits (o char #\ ))
    (= val (string val))
    (string (n-of (- digits (len val)) char) val))
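
For example, assuming the goal of 'int->str is zero-padding to a fixed width:

  arc> (pad 1 10 #\0)
  "0000000001"
  arc> (pad 100 10 #\0)
  "0000000100"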

Obviously arc.arc itself needs some serious documentation T.T

-----

2 points by stefano 5655 days ago | link

> Obviously arc.arc itself needs some serious documentation

I completely agree. It's not the first time I've written a function and then discovered that it was already there. I could just read all the Arc source code and try to remember everything, but I don't have enough time. What about a book on Arc written by the community?

-----

3 points by stefano 5655 days ago | link

Created: http://github.com/stefano/arc-book/tree/master

I'll give write access to anyone who wants to contribute some documentation.

-----

2 points by skenney26 5654 days ago | link

I'd be interested in contributing. Perhaps we should discuss possible topics for the first few chapters.

-----

2 points by stefano 5654 days ago | link

I've just added you as a contributor, so you should now be able to push things. The book should read as a natural continuation of the Arc tutorial, and it should target Anarki, not Arc2. A nice first chapter would be an overview of the most basic Arc macros and functions together with some examples. A chapter on system administration scripts with Arc could also be a good introductory chapter. I would leave web application development as one of the last chapters. A chapter on the available third-party libraries could be a good one.

-----

1 point by almkglor 5651 days ago | link

Would you be interested in having an "Arc-F" section?

I wrote what was supposed to be a rationale for the scanner abstraction, but it ended up looking more like a tutorial, LOL.

-----

1 point by stefano 5651 days ago | link

I was in fact thinking about a small section on arc-f installation (when I find the time) :). I'll give you access to the repository as soon as possible; I have to run now.

-----

1 point by stefano 5651 days ago | link

Ok, now you have read/write access to the repository: http://github.com/stefano/arc-book/tree/master

-----

1 point by almkglor 5647 days ago | link

@stefano:

Do you mind if I put a simplified BSD license on the Arc-F section of the arc-book?

Do you have any preferences for licensing?

-----

2 points by stefano 5645 days ago | link

The more permissive the license, the better. BSD should be fine.

-----

1 point by rincewind 5655 days ago | link

What do you have in mind?

"The Arc Cookbook", "Practical Arc"?

TeX, HTML, Markdown?

-----

2 points by stefano 5655 days ago | link

I don't have anything particular in mind. Everything that classifies as "documentation" is good.

> TeX, HTML, Markdown?

I think TeX would be a good choice.

-----

2 points by almkglor 5654 days ago | link

> I could just read all the Arc source code and try to remember everything, but I don't have enough time.

This is what I do every week.

LOL.

I need a girlfriend wahahahahahaha

As an aside, I've been forking Arc to include the symbol-based packages, and the 'interface thing is rather nice documentation in and of itself: it's kinda like a summary of the functions available, sort of like a "table of contents".

-----

2 points by stefano 5654 days ago | link

I'm looking forward to seeing your fork released! Have you introduced any significant incompatibilities with Anarki?

-----

5 points by almkglor 5654 days ago | link

Yes, off the top of my head: 'each on tables gives you the cons pair (k . v) instead of just the values. Passing strings to 'map in the third and later arguments no longer infects the output type; rather, the output type is always the type of the second argument.
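
Concretely, the 'each change means something like this (illustrative; keys come out in whatever order the table iterates):

  (each kv (obj a 1 b 2)
    (prn kv))      ; prints (a . 1) and (b . 2)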

t and nil are now boolean types, not symbols. Sorry. It's hard to reason about them with the newfangled type-based methods if they're symbol types instead of booleans.

Also, because of contexting, asv.arc will have to be rewritten so that it unpackages symbols in (defop ...) and related forms. If it's not rewritten, you'll have to use example.com/%3CUser%3Cyour-page instead of example.com/your-page. I think most asv-based applications will work properly if asv.arc is modded thus.

Also, most of your source files will probably need something like:

  (in-package my-file)
  (using <arc>v3)

You can leave that out but that means you'd be putting stuff in the <User> package, which is supposedly for exploratory scratch purposes (if your source is scratch, well then leave it out by all means ^^).

Note that <arc>v3 does not even include all Arc functions (for example, the threading stuff lives in <arc>v3-thread). There is some rationale for not including everything in <arc>v3, most notably for bits and pieces that are reasonably hard for other implementations to provide, such as threads.

Also, the fork is somewhat slower, especially for function calls T.T. So far I haven't experienced particularly serious slowdowns, though (thanks to some seriously crazy optimizations in ac.scm that were mostly inspired by my experiments with implementing the SNAP interpreter).

However the fork does have nice advantages. For example, it has generic functions with methods. So far it's just the ordinary dispatch-on-first-argument-type but I hope to expand that in the future to create full-blown generic functions with dispatch-on-any-combination-of-argument-type.

Also, it's now much easier to integrate your own types into the base arc.arc. For example, you can create your own sequence types which will work with 'map, 'join, etc. Those functions will even return your type, i.e. map on your type will return the same type. Generic programming FTW!

And of course, packages, whose integration was the biggest headache in the fork T.T . Largely because I decided that everything that gets passed to the compiler should already be in packages; this includes fn -> <axiom>fn etc.

It can be launched from any directory, and will remain in that directory when you program, unlike Anarki, which always cd's to the installation directory (it will still auto-search the installation directory for any 'load calls you make).

Slated for future is also a way to define ssyntax that works only within the scope of a package, meaning using your own ssyntax in one package won't affect another package.

-----

2 points by stefano 5653 days ago | link

I thought it was just about packages, but it is a much larger change! Well done! We really needed something new. I'll now look into integrating pack.arc with your package system.

-----

1 point by almkglor 5652 days ago | link

Thanks. You may wish to look through lines 381-383ish of ac.scm, which is the bit that calls 'require when you're trying to (using ...) some package.

I think pack.arc is nice enough to integrate into ac.scm and arc.arc (at the very least the package loading part; you can probably still keep the bit that creates packages in pack.arc, since it'll probably be used rarely enough).

Also, it might make sense to use the same context object for each file in a pack.

-----

1 point by stefano 5652 days ago | link

> you can probably still retain the bit that creates packages into pack.arc

Yes. I've written that part (defproject and related functions) as a base for a not-yet-written Arc IDE.

-----

1 point by bOR_ 5653 days ago | link

  "t and nil are now boolean types, not symbols. Sorry. It's hard to reason about them with the newfangled type-based methods if they're symbol types instead of booleans."
It's a bit hard for me to understand the consequences of this, and as you seem to be sorry about the change, I assume there are some consequences worth noting. Could you elaborate?

-----

1 point by almkglor 5653 days ago | link

It has to do with overloading 'car and 'cdr.

'car and 'cdr are now overloadable - see for example lazy-scanner.arc in arc-f/lib.

However classically (car nil) => nil and (cdr nil) => nil.

So basically we have something like this:

  (def car (x)
    (err "can't apply 'car to object" x))

  (defm car ((t x cons))
    ...some scheme-side code....)

  (defm car ((t x bool))
    (if x
        (err "attempt to scan t")
        nil))

Of course we could probably still retain t and nil as symbols. However the user might want to define iterating over symbols for some arcane purpose (exploratory, exploratory...). If 'nil is a symbol, the user has to specifically check for the nil symbol when overloading 'car for symbols.

The reason I'm sorry is really because I'm not 100% sure it's the Right Thing (TM), and because I really had to go on with Arc-F instead of dithering over 'nil.

-----

1 point by cchooper 5652 days ago | link

Of course, you still have the problem that nil is both a boolean and a list :)

-----

2 points by almkglor 5652 days ago | link

Ah ah ah.... no! The cute thing is that a 'bool presents the "scanner abstraction". Basically, a scanner is anything that overloads the functions 'car, 'cdr, 'scanner, and 'unscan. Thus, you don't have to check for the type of an object: you just need to pass it through 'scanner. If 'scanner throws, it's not a "list". If 'scanner returns a value, you can be sure that value can legitimately be passed to 'car and 'cdr.

From this point of view, a "list" is anything that happens to be a scanner.

So nil is a list. So are 'cons cells. So is anything that overloads 'scanner, 'unscan, 'car, and 'cdr.

Try:

  (using <lazy-scanner>v1)

...then play around with (lazy-scanner a d)

Or for a bit of ease of use generate an infinite list of positive integers (after doing the 'using thing):

  (generate [+ _ 1] 1)

Edit: to summarize: a list is not a cons cell. A cons cell is a list. ^^

-----

1 point by cchooper 5652 days ago | link

Ah...but...what if you want to define a generic function that operates differently on lists and bools (i.e. not a scanner, but a general generic function)? I haven't had a close look at Arc-3F yet, so maybe I need to play around a bit more to understand what you're saying :)

-----

1 point by almkglor 5652 days ago | link

Well, a "list" is a "scanner". So your "not a scanner" doesn't make sense, at least from the point of view of Arc-F.

However, if you mean "list" as in a sequence of cons cells:

  (def works-on-cons-cells-and-bools (x)
    (err "this works only on cons cells and bools!"))
  (defm works-on-cons-cells-and-bools ((t x cons))
    (work-on-cons-cells x))
  (defm works-on-cons-cells-and-bools ((t x bool))
    (work-on-bool x))

Note that you can even define a unifying "type class" function which ensures that the given data is a cons cell or a bool, or is convertible to one (i.e. an analog to 'scanner). For example, you might want a "hooper" type class:

  (def hooper (x)
    (err "Not convertible to a bool or cons cell" x))
  (defm hooper ((t x cons))
    x)
  (defm hooper ((t x bool))
    x)

Then you can convert works-on-cons-cells-and-bools with the type class:

  (def work-on-hooper (x)
    (works-on-cons-cells-and-bools (hooper x)))

Then, someone can make a type which supports the "hooper" type class by either overloading hooper (and returning a true hooper), or overloading hooper and works-on-cons-cells-and-bools:

choice one:

  (defm hooper ((t x my-type))
    (convert-my-type-to-cons x))

choice two:

  (defm hooper ((t x my-type))
    x)
  (defm works-on-cons-cells-and-bools ((t x my-type))
    (work-on-my-type x))

-----

1 point by bOR_ 5652 days ago | link

In reply to the summary: nice... some day I'll have to look into the belly of the beast. So Arc3F no longer has lists built up from conses? Or are lists built from conses, which are a special kind of list?

-----

2 points by almkglor 5652 days ago | link

Well, I prefer to think of cons-as-lists as one implementation of lists. It's possible to define alternative implementations of lists; all that is necessary is to define the overloads for 'car, 'cdr, 'scanner, and 'unscan. With generic functions in Arc-F, that is enough to iterate, cut, search, filter, join, map, and more on any list, regardless of whether it's made of 'cons cells or 'lazy-scanner objects.
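
For example, a trivial alternative list type that just wraps a cons cell might look like this (the argument order of 'unscan here is approximate, so treat it as a sketch):

  (def boxed (c) (annotate 'boxed c))
  (defm scanner ((t x boxed)) (rep x))       ; expose the underlying cons
  (defm unscan  ((t x boxed) s) (boxed s))   ; rebuild a boxed from a scanner
  (defm car     ((t x boxed)) (car (rep x)))
  (defm cdr     ((t x boxed)) (cdr (rep x)))

With those four overloads, 'map, 'join, 'each and friends should treat a boxed value as just another list.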

-----

2 points by drcode 5654 days ago | link

arcfn.com is a good stopgap

http://arcfn.com/doc/fnindex.html

-----

1 point by almkglor 5654 days ago | link

True; high-quality documentation, too.

It's not being updated, but then, neither is Arc ^^

-----

1 point by stefano 5656 days ago | link

Fixed.

-----

2 points by almkglor 5662 days ago | link

How about integrating 'require and/or load with this? BTW good work!

-----

1 point by stefano 5662 days ago | link

Seems like a good idea. When require sees a string it will keep its standard behavior, and when it sees a symbol it will load it as a packaged library. I'll add it. I'd also like to integrate it with a namespace system, but there is none yet. You've been working on a system based on symbols, right? What's its current state? Is it usable?

-----

2 points by almkglor 5661 days ago | link

It'll end up forking Arc, I'm afraid: I'm doing the system on the scheme side. I've gotten the translation up, but I'm still testing it. ^^

-----

3 points by stefano 5661 days ago | link

> It'll end up forking Arc

This is becoming unavoidable. It's really a pity, but official Arc development is simply too slow (assuming it is moving at all, which is quite an optimistic assumption).

-----

1 point by almkglor 5657 days ago | link

Integrating it into my forked Arc, but sadly something is interacting badly with the compiler and some macro is getting stuck in an infinite loop (which, if I don't catch it with a ^C early enough, will force me to actually restart my computer T.T), probably from an incorrect compile. I'm debugging it currently >.<

-----

2 points by almkglor 5656 days ago | link

Debugged! Now I just need to rewrite bits and pieces of arc.arc to clean it up and make it follow a more "axiomatic" path that will allow you to do neat things, like defining how (+ (your-type 1 2) 42) behaves (and add hacks to ac.scm so that the axiomatic path isn't so slow).

-----

1 point by stefano 5661 days ago | link

Integration done and on Anarki. Now (require 'mylib) loads 'mylib and (require "file.arc") loads, as usual, "file.arc".

-----

1 point by drcode 5663 days ago | link

I like this idea- good work!

-----