The (ice-9 futures)
module provides futures, a construct
for fine-grain parallelism. A future is a wrapper around an expression
whose computation may occur in parallel with the code of the calling
thread, and possibly in parallel with other futures. Like promises,
futures are essentially proxies that can be queried to obtain the value
of the enclosed expression:
(touch (future (+ 2 3))) ⇒ 5
However, unlike promises, the expression associated with a future may be evaluated on another CPU core, should one be available. This supports fine-grain parallelism, because even relatively small computations can be embedded in futures. Consider this sequential code:
(define (find-prime lst1 lst2)
  (or (find prime? lst1)
      (find prime? lst2)))
The two arms of or are potentially computation-intensive. They are
independent of one another, yet they are evaluated sequentially when
the first one returns #f. Using futures, one could rewrite it like
this:
(define (find-prime lst1 lst2)
  (let ((f (future (find prime? lst2))))
    (or (find prime? lst1)
        (touch f))))
This preserves the semantics of find-prime. On a multi-core machine,
though, the computation of (find prime? lst2) may be done in parallel
with that of the other find call, which can reduce the execution time
of find-prime.
Futures may be nested: a future can itself spawn and then touch
other futures, leading to a directed acyclic graph of futures. Using
this facility, a parallel map
procedure can be defined along
these lines:
(use-modules (ice-9 futures)
             (ice-9 match))

(define (par-map proc lst)
  (match lst
    (()
     '())
    ((head tail ...)
     (let ((tail (future (par-map proc tail)))
           (head (proc head)))
       (cons head (touch tail))))))
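For instance, with the definition above, par-map can be used like the
standard map procedure; the squaring lambda here is just an
illustrative, purely functional workload:

(par-map (lambda (n) (* n n)) '(1 2 3 4))
⇒ (1 4 9 16)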
Note that futures are intended for the evaluation of purely functional expressions. Expressions that have side-effects or rely on I/O may require additional care, such as explicit synchronization (see Mutexes and Condition Variables).
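As an illustrative sketch (not taken from the manual), a future that
must update shared state could serialize the mutation with a mutex
from (ice-9 threads); the results variable and the record! helper
below are hypothetical:

(use-modules (ice-9 futures)
             (ice-9 threads))

(define results '())                    ; hypothetical shared state
(define results-mutex (make-mutex))

(define (record! x)                     ; hypothetical helper
  (with-mutex results-mutex
    (set! results (cons x results))))

;; Each future mutates 'results', so the mutation is guarded by the
;; mutex; the final ordering of the list remains nondeterministic.
(for-each touch
          (map (lambda (n) (future (record! (* n n))))
               '(1 2 3)))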
Guile’s futures are implemented on top of POSIX threads
(see Threads). Internally, a fixed-size pool of threads is used to
evaluate futures, such that offloading the evaluation of an expression
to another thread doesn’t incur thread creation costs. By default, the
pool contains one thread per available CPU core, minus one, to account
for the main thread. The number of available CPU cores is determined
using current-processor-count
(see Processes).
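For example, on a four-core machine one would expect something like
the following (the exact value is machine-dependent), giving a pool of
three worker threads alongside the main thread:

(current-processor-count) ⇒ 4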
When a thread touches a future that has not completed yet, it processes
any pending future while waiting for it to complete, or just waits if
there are no pending futures. When touch
is called from within a
future, the execution of the calling future is suspended, allowing its
host thread to process other futures, and resumed when the touched
future has completed. This suspend/resume is achieved by capturing the
calling future’s continuation, and later reinstating it (see delimited continuations).
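A small sketch of a nested future: the inner touch may suspend the
outer future, allowing its worker thread to pick up other pending
futures until the inner one completes:

(define outer
  (future
    (let ((inner (future (+ 1 2))))
      (* 10 (touch inner)))))

(touch outer) ⇒ 30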
Scheme Syntax: future exp
Return a future for expression exp. This is equivalent to:
(make-future (lambda () exp))
Scheme Procedure: make-future thunk
Return a future for thunk, a zero-argument procedure.
This procedure returns immediately. Execution of thunk may begin
in parallel with the calling thread’s computations, if idle CPU cores
are available, or it may start when touch
is invoked on the
returned future.
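For example, this is equivalent to writing (future (* 6 7)):

(touch (make-future (lambda () (* 6 7)))) ⇒ 42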
If the execution of thunk throws an exception, that exception will
be re-thrown when touch
is invoked on the returned future.
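For instance, the exception surfaces at the call to touch and can be
caught there; this sketch wraps the touch call in a plain catch:

(catch 'oops
  (lambda ()
    (touch (future (throw 'oops))))
  (lambda (key . args)
    key))
⇒ oops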
Scheme Procedure: future? obj
Return #t if obj is a future.
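For example:

(future? (future (+ 2 3))) ⇒ #t
(future? 42)               ⇒ #f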
Scheme Procedure: touch f
Return the result of the expression embedded in future f.
If the result was already computed in parallel, touch
returns
instantaneously. Otherwise, it waits for the computation to complete,
if it already started, or initiates it. In the former case, the calling
thread may process other futures in the meantime.
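Touching the same future repeatedly returns the same result; once the
value has been computed, subsequent calls return immediately, as noted
above. For example:

(define f (future (expt 2 10)))
(touch f) ⇒ 1024
(touch f) ⇒ 1024   ; already computed, returns immediately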