When I posted my solution to the problem of how to safely register an atom as a process handler, I focused on the registration process itself. As it turns out, if you want to USE that registered process, the other code I referred to is simply incorrect: you will not be able to use the atoms you registered. Thanks to Roach for pointing that out.
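In outline, the pattern in question is: register a process under an atom, then send to it via that atom rather than via its pid. A minimal hypothetical sketch (illustrative names, not the actual module) looks like this:

-module(reg_sketch).
-export([start/1, ping/1]).

%% Register a freshly spawned process under the given atom.
start(Name) when is_atom(Name) ->
    Pid = spawn(fun loop/0),
    true = register(Name, Pid),   %% the atom now resolves to the process
    Pid.

%% Use the registered process by sending to the atom, not to a pid.
ping(Name) when is_atom(Name) ->
    Name ! {self(), ping},
    receive
        pong -> ok
    after 1000 -> timeout
    end.

loop() ->
    receive
        {From, ping} -> From ! pong, loop();
        stop -> ok
    end.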
The following is a complete module to test my approach. The messaging was slightly changed.
First, the sample session:
I recently reported on a successful SlideBlast presentation server setup. Unfortunately, it worked well only in Firefox and Safari, not in Internet Explorer: the Z-order is treated differently in the Microsoft browser line than, e.g., in Firefox. To make it work correctly in IE, the style parameter
z-index:3
was added to ./deps/nitrogen/src/elements/layout/element_lightbox.erl, so the element now looks as follows:
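In outline, the change looks roughly like this (a sketch only; the real element_lightbox.erl differs between Nitrogen versions, and only the added style parameter matters here):

%% Sketch, not the actual Nitrogen source: the panel the lightbox renders
%% gets "z-index:3" in its style, so IE stacks it above the underlying page.
render_element(Record) ->
    #panel {
        class = "lightbox",
        style = "z-index:3",        %% <- the added style parameter
        body  = Record#lightbox.body
    }.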
We want to start with a clean Ubuntu server, then build and run SlideBlast, created by Rusty Klophaus.
Yes, there is SlideBlast.com, but we need an instance "at home".
So, let us do it!
We will need Git and Mercurial to fetch the code, and Ghostscript and ImageMagick for SlideBlast; the build tools come first, since they are needed to compile Erlang.
Riak needs Erlang version 5.7.4 or newer, so we have to compile it from source:
wget http://erlang.org/download/otp_src_R13B03.tar.gz
tar zxf otp_src_R13B03.tar.gz
cd otp_src_R13B03/
./configure
make
sudo make install
cd ..
As of today, SlideBlast does not work with Riak 0.7, so we need to roll Riak back a bit.
Let us take the latest SlideBlast (with Nitrogen) and the "riak-0.6" tag of Riak:
git clone git://github.com/rklophaus/SlideBlast.git
cd SlideBlast/deps
hg clone http://bitbucket.org/basho/riak
cd riak
hg update -r riak-0.6
cd ../..
make
./start.sh
Now we would be ready to point a browser to http://localhost:8000 and start working.
Hey, what browser? We have Ubuntu Server running as a virtual guest. Let us make the app visible from outside: edit SlideBlast/src/caster_mochiweb_app.erl to look like this:
...
%%%Options = [{ip, "127.0.0.1"}, {port, 8000}], %%% Just this one line to update
Options = [{ip, "0.0.0.0"}, {port, 8000}],
...
Now we will re-compile it and try it from the host using the proper IP.
make
./start.sh
From the outer box (the host), we try it using the guest's IP address; in our case, http://192.168.1.115:8000.
And it worked!
Your IP address will, of course, be different.
A word of advice for users: it works best in Firefox and Safari; Chrome and IE show minor glitches but work too. Slides view best with big fonts and little text. The client browser is best opened full screen, and the "f" key expands the slide to use all available browser window space.
I would like to build a dynamic, data-driven Web application.
There is a set of data records, and I would like full control over their presentation at run time, i.e. customizable rendering.
I came across BeeBole's PURE Version 2 rendering engine, which immediately grew on me after some attempts with the first version of PURE. Here is a short demo, just to prove the concept.
The complete code is placed at BitBucket to play with.
The idea: I was scrolling through the BeeBole examples (selecting jQuery there), and Example #5, "Dynamic Table", had double rendering in it. It was a bit sad to notice that the columns were pre-defined in the rendering directive, and the JSON structure was used only for rendering the records.
In that example, the rendered table will always have the name, food and legs columns, and the JSON has no control over that.
Well, a little upgrade and it all gets resolved.
var ex05 = {
    template:'table.partialTable',
    data:{
        cols:['name', 'food', 'legs'],
        animals:[
            {name:'bird', food:'seed', legs:2},
            {name:'cat', food:'mouse, bird', legs:4},
            {name:'dog', food:'bone', legs:4},
            {name:'mouse', food:'cheese', legs:4}
        ]
    },
    directive1:{
        'th':{
            'col<-cols':{
                '.':'col'
            }
        },
        'td':{
            'col<-cols':{
                '@class':'col'
            }
        }
    },
    directive2:{
        'tbody tr':{
            'animal<-animals':{           // loop over all records
                'td': {
                    'col <- cols': {      // take each column in cols
                        '.': recValue,    // the td value will be the result of a call to recValue
                        '@class':'col'    // optional line - the class of the td will be the col value from the loop
                    }
                }
            }
        }
    }
};

// the standard arg has an item property and all the context:
// arg.animal.item is the current external loop value (the record)
function recValue(arg){
    return arg.animal.item[arg.item];
}
What is going on here?
The external loop, animal <- animals, places the records into the animal object one by one. For each record, the internal loop over col <- cols is created. The recValue function is called with the argument arg, an object that carries the context of both loops: arg.item holds the current col value, and arg.animal.item holds the current animal record. What is left is to pick the right property of that record using the col value, and that is exactly what recValue does.
So, we have generic HTML and JavaScript code that uses only the "cols" and "animals" names, while the contents of the JSON determine what the table will display.
[EDIT] The out-of-sync README has already been fixed. That was fast! After the README fixes, the only bump in the road is installing the correct Erlang version; after that, clone, make, edit config/riak-demo.erlenv and start all go very smoothly. Please disregard all the "fighting" and "lying" below; it was actual only for me. [/EDIT]
I wanted to install and launch Riak on my Ubuntu 9.04. It did not work out at first: my conveniently apt-get-installed Erlang 5.6.4 turned out to be incompatible with Riak, so I needed the latest Erlang.
There is also a minor README issue, easily remedied below. I hope that by the time you read this it will have been fixed, but the routine I describe works fine with the current version anyway.
So, the terribly convenient sudo apt-get install erlang does not yet install the newer Erlang 5.7.3, and I had to compile the latest otp_src_R13B02-1.tar.gz myself.
Here is the sequence which led me to success:
0. sudo apt-get remove erlang (this depends on your installation; in my case I also needed:)
0.1 sudo apt-get clean
0.2 sudo apt-get remove erlang-base
1. sudo apt-get install build-essential libncurses5-dev m4
2. sudo apt-get install openssl libssl-dev
3. tar -zxf otp_src_R13B02-1.tar.gz
4. cd otp_src_R13B02-1
5. ./configure
6. make
7. sudo make install
Now if you are lucky, you have the right version of Erlang:
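A quick check from the Erlang shell (the values shown are what an R13B02 build should report):

1> erlang:system_info(otp_release).
"R13B02"
2> erlang:system_info(version).    %% the ERTS version; 5.7.3 corresponds to R13B02
"5.7.3"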
Now we have to fight with the Riak documentation, which is out of sync with Riak itself. It may be a good sign: real hackers never write documentation! (c) some classics... [EDIT] Actually, I have to credit this team with a bunch of nice intros and comments. [/EDIT]
Nothing happens. Riak is supposed to be working in the background now. Is it?
serge@ubuntu:~/src/riak$ ./start-fresh.sh config/riak-demo.erlenv
Attempting to connect to 'riakdemo@127.0.0.1' with cookie riak_demo_cookie...
Connected successfully
Looking for pre-existing object at {<<"riak_demo">>, <<"demo">>}...
Pre-existing object found, modifying
Storing object with new value...
Written successfully
Fetching object at {<<"riak_demo">>, <<"demo">>}...
Fetched successfully
Object contained correct value
SUCCESS
"SUCCESS" looks like success!
Now comes the point where the README is [EDIT] no longer! [/EDIT] "lying", so we will do the right thing on our own:
The previous post was about merging the StickyNotes app with Webmachine; release 0.1 was deployed on BitBucket.
That was the "don't touch anything" approach: the solution was elegant, but the data-flow mechanics were all hidden behind POST.
In release 0.2, the create, update, delete, and read operations were made explicit through HTTP POST, PUT, DELETE, and GET, which potentially allows the web server to cache the data and reduce the load.
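As an illustration of that mapping, a hedged sketch of a Webmachine resource exposing all four operations could look like the following; the module name and the notes_db helper calls are hypothetical, not taken from the release:

%% Sketch only: the notes_db:* helpers are hypothetical placeholders.
-module(notes_resource_sketch).
-export([init/1, allowed_methods/2, content_types_provided/2,
         content_types_accepted/2, to_json/2, from_json/2,
         process_post/2, delete_resource/2]).

init([]) -> {ok, undefined}.

allowed_methods(ReqData, State) ->
    {['GET', 'POST', 'PUT', 'DELETE'], ReqData, State}.

content_types_provided(ReqData, State) ->
    {[{"application/json", to_json}], ReqData, State}.

content_types_accepted(ReqData, State) ->
    {[{"application/json", from_json}], ReqData, State}.

to_json(ReqData, State) ->                      %% read   -> GET
    {notes_db:read_all_json(), ReqData, State}.

from_json(ReqData, State) ->                    %% update -> PUT
    ok = notes_db:update(wrq:path_info(id, ReqData), wrq:req_body(ReqData)),
    {true, ReqData, State}.

process_post(ReqData, State) ->                 %% create -> POST
    ok = notes_db:create(wrq:req_body(ReqData)),
    {true, ReqData, State}.

delete_resource(ReqData, State) ->              %% delete -> DELETE
    ok = notes_db:delete(wrq:path_info(id, ReqData)),
    {true, ReqData, State}.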
The home page allows you to check, turn on, turn off, and view the Webmachine TRACE without any extra coding; all the functionality is already supplied in the admin resource.
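For reference, the Webmachine tracing hooks this builds on look roughly like this (a sketch; the trace directory and dispatch path are assumptions, not the app's actual settings):

%% 1. The resource to be traced returns {{trace, Dir}, State} from init/1
%%    instead of {ok, State}:
init([]) ->
    {{trace, "/tmp"}, undefined}.

%% 2. The trace viewer is exposed by adding wmtrace_resource to the dispatch list:
%%    {["wmtrace", '*'], wmtrace_resource, [{trace_dir, "/tmp"}]}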
jQuery was extended for PUT and DELETE in application.js; some say the next jQuery release will have it all built in. Anyway, it was a pretty straightforward copy/paste.
notes.erl was slightly edited for the read method, to bring its data structure in line with the other access methods.
The release 0.2 code with pre-compiled binaries is platform-independent and ready to run; you only have to install Erlang/OTP first. After downloading and unpacking the zip, use start.sh (or start.cmd on Windows) and point your browser to http://127.0.0.1:8000/
[Edit] The story continues with release 0.2: the application goes RESTful. Plus, you can now turn the webmachine TRACE ON/OFF right from your browser. [/Edit]
Kevin Smith from Hypothetical Labs has recently stressed the value of Webmachine. It turned out to be not just more "cool" stuff, but the real thing.
Thanks, Kevin!
The best way for me to play with and learn it is to use a real working application. That turned out to be the very nice StickyNotes created by Hughes Waroquers of BeeBole. Actually, BeeBole itself has a trailer that looks inspiring, and I can see where the roots of the coolness come from.
The post by Justin Sheehy shows how to integrate StickyNotes with the original design of Webmachine: he took StickyNotes "as-is" and put them under the Webmachine layer.
But the snippets given there, which add up to the resource module, do not work with the newest Webmachine releases: the wrq module was added to encapsulate request-data manipulation, and the calling convention was updated as well.
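For example, with the wrq-based convention a resource callback reads the request through wrq accessors and returns a three-tuple. A small sketch (the body logic is hypothetical; the wrq calls are real):

%% Sketch: reading request data through wrq in a resource callback.
to_html(ReqData, State) ->
    Id   = wrq:path_info(id, ReqData),   %% bound path token, e.g. /notes/:id
    Body = io_lib:format("<p>note ~s, fetched via ~s</p>", [Id, wrq:method(ReqData)]),
    {Body, ReqData, State}.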
So, to make up for that and go a bit further, I have created a new repository on BitBucket. It takes the StickyNotes client and DB parts "as is" and, on the protocol level (as was suggested by Justin), does the following (see the dispatch sketch after this list):
separate static pages and dynamic traffic between different resources
restrict access methods for each resource
send read requests through GET, not POST (coming soon)
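A hedged sketch of what that split looks like in a Webmachine dispatch list (priv/dispatch.conf style; the paths and resource names are illustrative, not taken from the repository):

%% Dynamic JSON traffic and static pages go to different resources,
%% each of which can restrict its own set of allowed methods.
{["notes", '*'], notes_resource, []}.
{['*'], static_resource, [{root, "priv/www"}]}.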
The distribution, when downloaded, is ready to launch (use run.cmd on Windows). It only needs a working Erlang/OTP.
The home page is now a "pure" AJAX web client, allowing you to switch GET/POST... and the Content Type, and see the results in detail, very much like curl (not that powerful, but handy). I left some debug prints on the server to show how the data is reflected in the back-end.
Bryan Fink's posts on PUT, Authorization, ETags, and DELETE were extremely helpful for getting the concept. Well, POST is not covered there, but it is here now! And last but not least is Justin's screencast, which was very inspiring for me.