Every now and then software from the 2010s or 2000s or 1990s is requested.
The original hosts for the software projects departed.
The repositories disappeared.
The tarballs were misplaced.
Excuses excuses.
It cannot all be found.
Sorry.
But recently some ISO9660 images for Sorcerer and
a previously unpublished POSIX were found.
Whether the Sorcerer ISO9660 images were the last ones
rolled, or merely from near the end, is uncertain.
The unpublished Pellucid POSIX can be test deployed,
but the tools and software catalog were not included.
And that version of ascript no longer exists.
All ascript versions earlier than 202001
were deprecated and discarded.
Their grammar is not compatible with current ascript.
Had illness not struck, had more people helped and fewer
people hampered, then Pellucid POSIX would have been developed,
made available on current ascript, and deployed.
Amid too much illness to do everything, Sorcerer slipped away.
Pekka Panula's hosting, which lasted longer than
kublai.com, savannah, berlios.de, and ibiblio.org combined, ended.
Why? Who knows. It was generous.
Ending it was probably the correct decision.
The health required to maintain the Sorcerer POSIX never returned.
Please contemplate and enjoy what could have become, but never did.
Had others helped, a different outcome would have come about.
Many treasures will not be created and will not be shared.
Ever wonder how many treasures never came to be,
because a wage slave was not commanded, or because
a free mind chose otherwise, or was homeless or dying?
https://drive.google.com/drive/folders/1pXUmkjun5PKwQldcSRzHZU8Zo-rAb-8v?usp=sharing
Above, again, is the URL for access to the shared Google Drive directory.
Request whatever is not there.
But not everything requested can be found.
A FUSE-mountable file system for an ISO9660 file served
over http or https was once written.
It worked, though not very fast.
But where it exists now, or even its name,
is no longer recollected.
C + curl + fuse and done.
Not much time or effort should be required.
Good luck.
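Below is a minimal sketch of how such a tool might be rebuilt with C + curl + fuse.
It is not the lost original. The URL https://example.org/sorcerer.iso, the exposed
name /image.iso, and every function name here are invented for illustration. The
sketch uses the FUSE 2 high-level API and libcurl HTTP range requests, exposes the
remote ISO as one read-only local file with no caching at all, and leaves
loop-mounting that file to the kernel iso9660 driver.

#define FUSE_USE_VERSION 26
#include <fuse.h>
#include <curl/curl.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical URL; substitute the real location of the image. */
static const char *iso_url = "https://example.org/sorcerer.iso";
static curl_off_t iso_size;

struct sink { char *buf; size_t want, got; };

/* libcurl write callback: copy the response body into the FUSE buffer. */
static size_t write_cb(char *ptr, size_t sz, size_t nmemb, void *ud)
{
    struct sink *s = ud;
    size_t n = sz * nmemb;
    if (n > s->want - s->got)
        n = s->want - s->got;              /* never overrun the buffer */
    memcpy(s->buf + s->got, ptr, n);
    s->got += n;
    return sz * nmemb;
}

/* One HTTP range request per read() call; no caching in this sketch. */
static int fetch_range(char *buf, size_t size, off_t off)
{
    char range[64];
    struct sink s = { buf, size, 0 };
    CURL *c = curl_easy_init();
    if (!c) return -EIO;
    snprintf(range, sizeof range, "%lld-%lld",
             (long long)off, (long long)(off + size - 1));
    curl_easy_setopt(c, CURLOPT_URL, iso_url);
    curl_easy_setopt(c, CURLOPT_RANGE, range);
    curl_easy_setopt(c, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(c, CURLOPT_WRITEDATA, &s);
    CURLcode rc = curl_easy_perform(c);
    curl_easy_cleanup(c);
    return rc == CURLE_OK ? (int)s.got : -EIO;
}

/* The mount point holds a single read-only file, /image.iso. */
static int iso_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof *st);
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755; st->st_nlink = 2; return 0;
    }
    if (strcmp(path, "/image.iso") == 0) {
        st->st_mode = S_IFREG | 0444; st->st_nlink = 1;
        st->st_size = (off_t)iso_size; return 0;
    }
    return -ENOENT;
}

static int iso_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                       off_t off, struct fuse_file_info *fi)
{
    (void)off; (void)fi;
    if (strcmp(path, "/") != 0) return -ENOENT;
    fill(buf, ".", NULL, 0);
    fill(buf, "..", NULL, 0);
    fill(buf, "image.iso", NULL, 0);
    return 0;
}

static int iso_read(const char *path, char *buf, size_t size, off_t off,
                    struct fuse_file_info *fi)
{
    (void)fi;
    if (strcmp(path, "/image.iso") != 0) return -ENOENT;
    if (off >= (off_t)iso_size) return 0;
    if (off + (off_t)size > (off_t)iso_size)
        size = (size_t)(iso_size - off);
    return fetch_range(buf, size, off);
}

static struct fuse_operations ops = {
    .getattr = iso_getattr, .readdir = iso_readdir, .read = iso_read,
};

int main(int argc, char *argv[])
{
    /* A HEAD request learns the remote file size before mounting. */
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *c = curl_easy_init();
    curl_easy_setopt(c, CURLOPT_URL, iso_url);
    curl_easy_setopt(c, CURLOPT_NOBODY, 1L);
    curl_easy_setopt(c, CURLOPT_FOLLOWLOCATION, 1L);
    if (curl_easy_perform(c) != CURLE_OK)
        return fprintf(stderr, "HEAD request failed\n"), 1;
    curl_easy_getinfo(c, CURLINFO_CONTENT_LENGTH_DOWNLOAD_T, &iso_size);
    curl_easy_cleanup(c);
    return fuse_main(argc, argv, &ops, NULL);
}

Assuming the libfuse 2 and libcurl development packages are installed, and the
file is called httpiso.c, something like

  gcc httpiso.c -o httpiso $(pkg-config fuse libcurl --cflags --libs)
  ./httpiso -f /mnt/httpiso
  mount -o loop,ro /mnt/httpiso/image.iso /mnt/iso   (as root)

should build and mount it. CURLINFO_CONTENT_LENGTH_DOWNLOAD_T needs libcurl 7.55
or newer; older libcurl offers CURLINFO_CONTENT_LENGTH_DOWNLOAD, which returns
a double instead.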
If trying it: mmapping enough space for the entire file
and then using mincore to check
whether the data has already been downloaded seems prudent.
Downloading in advance, filling at least whole 4K pages
if not more with every curl request, also seems prudent.
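Here is a sketch of that mmap + mincore idea, again only an illustration:
cache_init, fetch_into, and read_cached are invented names, iso_url is the same
hypothetical URL as above, and iso_size is assumed to have already been learned
from a HEAD request as in the previous sketch. read_cached would replace the body
of the FUSE read handler. Writing into the anonymous mapping makes its pages
resident, so mincore residency stands in for "already downloaded"; the caveat is
that a page swapped out later reads as absent and gets downloaded again.

#define _DEFAULT_SOURCE
#include <curl/curl.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Same assumed URL as before; iso_size is learned via a HEAD request
 * exactly as in the previous sketch. */
static const char *iso_url = "https://example.org/sorcerer.iso";
static curl_off_t iso_size;

static unsigned char *cache;   /* anonymous mapping covering the whole image */
static long page;              /* system page size */

/* Reserve address space for the entire remote file.  MAP_NORESERVE keeps
 * the kernel from committing swap for the whole 650M+ up front. */
static int cache_init(void)
{
    page = sysconf(_SC_PAGESIZE);
    cache = mmap(NULL, (size_t)iso_size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    return cache == MAP_FAILED ? -1 : 0;
}

struct span { unsigned char *dst; size_t left; };

static size_t span_cb(char *p, size_t sz, size_t nm, void *ud)
{
    struct span *s = ud;
    size_t n = sz * nm;
    if (n > s->left) n = s->left;
    memcpy(s->dst, p, n);
    s->dst += n; s->left -= n;
    return sz * nm;
}

/* Download [off, off+len) straight into the cache mapping. */
static int fetch_into(off_t off, size_t len)
{
    char range[64];
    struct span s = { cache + off, len };
    CURL *c = curl_easy_init();
    if (!c) return -EIO;
    snprintf(range, sizeof range, "%lld-%lld",
             (long long)off, (long long)(off + len - 1));
    curl_easy_setopt(c, CURLOPT_URL, iso_url);
    curl_easy_setopt(c, CURLOPT_RANGE, range);
    curl_easy_setopt(c, CURLOPT_FOLLOWLOCATION, 1L);
    curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, span_cb);
    curl_easy_setopt(c, CURLOPT_WRITEDATA, &s);
    CURLcode rc = curl_easy_perform(c);
    curl_easy_cleanup(c);
    return rc == CURLE_OK ? 0 : -EIO;
}

/* Serve a read from the cache, downloading whole pages on a miss.
 * mincore() residency stands in for "already downloaded": writing into the
 * anonymous mapping makes its pages resident, so a page that is not resident
 * has never been fetched (or was swapped out and must be fetched again).
 * The caller clamps size to the file bounds, as in the previous read handler. */
static int read_cached(char *buf, size_t size, off_t off)
{
    if (size == 0)
        return 0;
    off_t lo = off & ~((off_t)page - 1);                      /* round down */
    off_t hi = (off + (off_t)size + page - 1) & ~((off_t)page - 1);
    if (hi > (off_t)iso_size) hi = (off_t)iso_size;
    size_t npages = (size_t)((hi - lo + page - 1) / page);
    unsigned char vec[npages];
    if (mincore(cache + lo, (size_t)(hi - lo), vec) != 0)
        return -EIO;
    for (size_t i = 0; i < npages; i++) {
        if (vec[i] & 1)
            continue;                     /* page already downloaded and resident */
        off_t poff = lo + (off_t)i * page;
        size_t plen = (size_t)page;       /* fill at least one 4K page per request */
        if (poff + (off_t)plen > (off_t)iso_size)
            plen = (size_t)((off_t)iso_size - poff);
        int rc = fetch_into(poff, plen);  /* adjacent misses could be batched */
        if (rc < 0)
            return rc;
    }
    memcpy(buf, cache + off, size);
    return (int)size;
}

Adjacent missing pages could be batched into a single larger range request,
which would probably help more than anything else here.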
Even then the performance might still seem a little slow.
In the past, keeping 650M or more resident
in RAM + swap would not have been desirable.
On modern computers a better, faster approach is possible.