Monday, December 8, 2008
I know many of you out there have been hoping for just such an integration, so this is both a heartwarming development to complement your regular holiday cheer and a hint of the many interesting things the future holds for BlazeDS and LCDS.
Sunday, November 23, 2008
Courtesy of The Economist:
"Mr Geithner looks a lot younger than his 47 years (though not as young as he did before the crisis began). He skateboards and snowboards and exudes a sort of hipster-wonkiness, using “way” as a synonym for “very” as in “way consequential” and occasionally underlining his point with the word “f***”."
By implementing the same basic app on both BlazeDS and LCDS, they serve as a simple illustration of how the up-front design considerations as well as the actual implementation of a real-time collaborative app will vary depending on which server library you're using. The delta isn't too serious for simple examples, but LCDS makes things much easier, and the pay-off is way above linear as the complexity of your application and its data model grows. Take a careful look through the configuration as well as the client and server source code for both samples to get a better sense for how the channels/endpoints and services you'd use differ, as well as how your application's use of service destinations would differ.
I've zipped up each demo as a fully self-contained, exploded web app that you can deploy to the JEE app server or Servlet container of your choice. If you're running Tomcat with HTTP on port 8400, deployment is straightforward: drop them into your /webapps directory and add Context entries for them.
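For example, assuming you unzipped the demos into /webapps/bchat and /webapps/chat (those directory names are my assumption, not taken from the zips), the Context entries in a 2008-era Tomcat's conf/server.xml would look something like:

```xml
<!-- Illustrative entries; adjust path/docBase to wherever you unzipped the demos -->
<Context path="/bchat" docBase="bchat" />
<Context path="/chat" docBase="chat" />
```

Newer Tomcat versions prefer a per-app context XML file over editing server.xml, but the element is the same.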
I hard-coded my channel/endpoint URLs to use localhost:8400 for HTTP in the BlazeDS demo; the LCDS demo uses RTMP on the default port of 1935. If you need different IPs, domain names, or ports locally, open the corresponding /WEB-INF/flex/services-config.xml file and adjust the channel/endpoint URL values accordingly.
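As a point of reference, a BlazeDS channel definition in services-config.xml looks something like the following (the channel id and context path here are illustrative defaults, not copied from the demo; only the host and port in the url attribute need to change):

```xml
<channel-definition id="my-amf" class="mx.messaging.channels.AMFChannel">
    <endpoint url="http://localhost:8400/bchat/messagebroker/amf"
              class="flex.messaging.endpoints.AMFEndpoint"/>
</channel-definition>
```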
Both demos include the integrated web tier compiler, so you can browse to either /bchat/BlazeSimpleChat.mxml or /chat/LCDSSimpleChat.mxml respectively and compile and run the app without having to set up and build the client swf manually or in FlexBuilder. This also means you can tweak things and recompile easily as you play with the apps.
The one thing both demos depend on is a MySQL database. If you don't have MySQL running locally, go grab it and install it. Once installed, you'll need to define a database for each demo.
For the BlazeDS demo, create the database from the MySQL command line with:
CREATE DATABASE bchat_db;
For the LCDS demo, do:
CREATE DATABASE chat_db;
We also need to define the account the demos use to connect to and manage the databases:
GRANT ALL PRIVILEGES ON *.* TO 'javauser'@'localhost'
IDENTIFIED BY 'javapass' WITH GRANT OPTION;
That grant is wide open; if you have additional security considerations, take them into account by limiting the grants to just these two test databases. The MySQL docs are an excellent resource if you have any questions about these steps.
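If you'd rather not hand out global privileges, a tighter grant scoped to just the two demo databases would look something like this (same account and password as above):

```sql
-- Scoped alternative to the wide-open grant above
GRANT ALL PRIVILEGES ON bchat_db.* TO 'javauser'@'localhost' IDENTIFIED BY 'javapass';
GRANT ALL PRIVILEGES ON chat_db.* TO 'javauser'@'localhost' IDENTIFIED BY 'javapass';
```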
The demos both use Hibernate to simplify the management of persistent application data, and the hibernate.cfg.xml files for both apps should be updated so they drop, recreate, and re-initialize these databases the first time you start your web app(s). To do this, uncomment the following line in these files (the copies under /WEB-INF/classes, not /WEB-INF/src):
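The property in question is Hibernate's standard schema-export setting; commented out, it typically looks like this in hibernate.cfg.xml (the property name is standard Hibernate, though the exact formatting in the demo files may differ):

```xml
<!-- Uncomment to drop and recreate the schema on startup -->
<!-- <property name="hibernate.hbm2ddl.auto">create</property> -->
```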
Note: the import.sql file in the same directory is used to initialize the data in your database when you have this hibernate property uncommented.
So, without any further ado, links to download the demo zips:
BlazeDS Simple Chat Demo
LCDS Simple Chat Demo
And a link to download the PowerPoint slide deck for my talk (don't overlook the notes on each slide - they provide more info and background beyond the high-level points):
Building Real-Time and Collaborative Applications with Flex and BlazeDS
PS: The session itself was recorded, and the video as well as the PowerPoint slide deck should be available soon to MAX attendees via the MAX site. The video should end up publicly available at some point but I don't have a hard date for that.
Wednesday, November 19, 2008
I can't help but think it was influenced by Fully Flared, which I believe was a sea change; from here on out, things will only get better. For those who haven't had the pleasure: every time I watch the opening sequence, it's as refreshing as the first time. Followed by Mike Mo's opener, and everything else... words don't do it justice.
Thursday, November 13, 2008
Post-MAX, I'll be posting my demo source, etc. here.
Absolutely not to be missed is Jeff Vroom's talk on 'Advanced Declarative Persistence Using JPA and LiveCycle Data Services' on Tuesday, the 18th.
For those working with ColdFusion along with BlazeDS and LiveCycle Data Services, Tom Jordahl's talk, also on Tuesday the 18th, should be excellent.
Christophe and Anil Channappa will be running workshops every day to get folks up and running with deploying BlazeDS and LiveCycle Data Services.
And that's just my little corner of the world.
Monday, November 3, 2008
My colleague Jono Spiro hassled me weeks ago to post this for my own benefit, so here it is, fashionably late... This is culled from conversations with my pal and ex-Player dev Peter Grandmaison, along with my own experiences working with LocalConnection.
LocalConnection uses a block of shared memory to exchange messages between player processes. Outgoing messages targeting a LocalConnection are serialized and added to a queue within the sending player process, and these messages are moved into shared memory during idle time if there's room. Similarly, if messages sent to a LocalConnection owned by the player process are resident in shared memory, they're drained from shared memory during this idle time and processed by the player.
If the recipient process goes away without cleanly releasing its ownership of a LocalConnection, this can end up wedging the shared memory for the LocalConnection because messages aren't being removed for processing. Without any draining, there's no room for new messages.
The most common scenario leading to this is a player instance (or Breeze plugin, or AIR, or...) crashing or being force-quit. Stopping a debug session in FlexBuilder behaves this way.
So a fix was implemented: any player process polling shared memory will expunge anything stored there for more than 5 seconds, regardless of which LocalConnection the messages are associated with. This reap cycle prevents the shared memory for a LocalConnection from staying wedged indefinitely - yay!
The implied caution here is not to send messages to a LocalConnection that will take the receiving process longer than a second or two to process. If you do, you run the risk of subsequent messages magically vanishing into the ether when they're reaped under the 5-second rule.
In addition to the 5-second rule, there's a 40K size limit on the data sent in each message; that's covered in the LiveDocs for LocalConnection.
So the take away is to keep your LocalConnection interactions short and sweet.
If you're trying to ship a large amount of data from one player process to another, do your best to limit the total amount of data you need to send, and if it's still substantial then slice it up to send in chunks. Make sure each chunk is guaranteed to be processed by the receiver(s) in well under 5 seconds.
The simplest way to handle this sort of chunking is to have the sending player process open its own LocalConnection that the receiving processes can 'ack' back over. This lets you chain the transfer like so, and protects against the sender potentially flooding the receiver:
Player 1 ---- sends data chunk over primary LC ---> Player 2
Player 1 <----- returns ack over secondary LC ------ Player 2
... rinse and repeat ...
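A rough ActionScript 3 sketch of the sending side follows. The connection names, method names, and chunk size here are all made up for illustration - only the LocalConnection API itself (connect, send, client) is real:

```actionscript
import flash.net.LocalConnection;
import flash.utils.ByteArray;

const CHUNK_SIZE:uint = 32 * 1024; // stay comfortably under the 40K limit

var payload:ByteArray; // the data you want to ship
var offset:uint = 0;

// Secondary LC: the receiver acks each chunk back over this connection,
// and each ack triggers the next chunk.
var ackConn:LocalConnection = new LocalConnection();
ackConn.client = { chunkReceived: function():void { sendNextChunk(); } };
ackConn.connect("_chatAck"); // illustrative name

// Primary LC, used only for sending.
var sendConn:LocalConnection = new LocalConnection();

function sendNextChunk():void {
    if (offset >= payload.length) return; // transfer complete
    var len:uint = Math.min(CHUNK_SIZE, payload.length - offset);
    var chunk:ByteArray = new ByteArray();
    payload.position = offset;
    payload.readBytes(chunk, 0, len);
    offset += len;
    sendConn.send("_chatData", "receiveChunk", chunk); // illustrative names
}

sendNextChunk(); // kick off; each ack from the receiver pulls the next chunk
```

The receiver would connect("_chatData"), implement receiveChunk on its client object, and send("_chatAck", "chunkReceived") after processing each piece. Because the sender waits for the ack, no chunk can sit unprocessed long enough to trip the 5-second reaper.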
Also, if the process that opened a LocalConnection doesn't exit cleanly, other processes will receive a "connection in use" error if they attempt to create a LocalConnection with the same name. You can work around this by retrying on an interval slightly longer than 5 seconds, which gives the reap cycle a chance to clean up the orphaned shared memory, at which point the retry can successfully create the desired LocalConnection.
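That retry might look something like this in ActionScript 3 (the connection name and the 6-second interval are illustrative; what's real is that connect() throws an ArgumentError when the name is already claimed):

```actionscript
import flash.net.LocalConnection;
import flash.utils.setTimeout;

// Keep a reference at script level so the connection isn't garbage collected.
var lc:LocalConnection;

function connectWithRetry(name:String):void {
    lc = new LocalConnection();
    try {
        lc.connect(name); // throws ArgumentError if the name is still claimed
    } catch (e:ArgumentError) {
        // Wait just past the 5-second reap window, then try again.
        setTimeout(connectWithRetry, 6000, name);
    }
}

connectWithRetry("_chatData"); // illustrative name
```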