Yes, I did it again. If Adobe ever kills ColdFusion you can blame me. ;) This is just an FYI to let folks know I've rewritten ColdFusion Bloggers as a Node.js site running on the AppFog platform. To be clear, no, I'm not trying to kill ColdFusion! I'm migrating off my old ColdFusion server and setting up my old sites in a simpler form because - well - I want my life to be simpler. My only real "server" will be this blog, and as I'm still adjusting the settings a bit and tuning WordPress, I want everything else I run to be as simple and low-maintenance as possible. Plus - I also kinda want to get better at Node.js!
As before, if folks are curious about the code, I've put it up on GitHub for you to look at and laugh at: https://github.com/cfjedimaster/nodecfbloggers. To be clear, this is not meant to be an example of good Node.js programming. It is just meant to be... well... an example. (And let me publicly thank Derick Bailey. After I posted about the CFLib migration, he shared with me some online training he created. I haven't had a chance to check it out yet, but I definitely appreciate him sharing his knowledge with me.)
For the most part the conversion was simple. As with CFLib, I wrote a script in ColdFusion that used CFMongoDB to insert the data into Mongo. For folks curious as to how that looked, here is the script.
// Connect to Mongo (via CFMongoDB) and grab the source data
mongoConfig = createObject('component','cfmongodb.core.MongoConfig').init(dbName="cfbloggers");
mongo = createObject('component','cfmongodb.core.MongoClient').init(mongoConfig);

blogquery = queryExecute("select id, name, description, url, rssurl, status from blogs");
writedump(var=blogquery,top=3);

// Wipe both collections so the import starts clean
blogs = mongo.getDBCollection("blogs");
blogs.remove({});
entries = mongo.getDBCollection("entries");
entries.remove({});

entriesAdded = 0;
blogsAdded = 0;

for(i=1; i<=blogquery.recordCount; i++) {
	row = blogquery.getRow(i);
	doc = {
		"name":row.name,
		"description":row.description,
		"url":row.url,
		"rssurl":row.rssurl,
		"status":row.status
	};
	blogs.save(doc);
	blogsAdded++;

	// Import this blog's entries, linking each one to the new Mongo _id
	entryquery = queryExecute("select id, blogidfk, title, url, posted, content, categories, created from entries where blogidfk=:blog", {blog:row.id});
	if(i == 1) writedump(var=entryquery,top=4);

	for(k=1; k<=entryquery.recordCount; k++) {
		row = entryquery.getRow(k);
		entrydoc = {
			"blog":doc._id,
			"title":row.title,
			"url":row.url,
			"posted":row.posted,
			"content":row.content,
			"categories":row.categories,
			"created":row.created
		};
		entries.save(entrydoc);
		entriesAdded++;
	}
}

writeOutput("<p>Done. Blogs added: #blogsAdded#. Entries added: #entriesAdded#</p>");
I then went about rebuilding the functionality with Node. I removed quite a bit - including all user management. I didn't have many users and the main functionality (alerts for keywords) can easily be done with IFTTT. I had an alert for my name that I've already moved over there. If folks need help with it, let me know. I also ripped out the jQuery UI, removed the Ajax page loading, and just simplified as much as possible.
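For anyone curious what those keyword alerts amount to under the hood, the core is just checking each new entry's text against a list of keywords. Here's a minimal sketch - the function name and entry shape are my own illustration, not code from the actual site:

```javascript
// Return the keywords (case-insensitive) found in an entry's title
// or content. The entry shape loosely mirrors the Mongo documents
// created by the migration script above.
function matchKeywords(entry, keywords) {
	var haystack = (entry.title + ' ' + entry.content).toLowerCase();
	return keywords.filter(function(kw) {
		return haystack.indexOf(kw.toLowerCase()) !== -1;
	});
}

var entry = { title: 'New post by Raymond Camden', content: 'All about Node.js.' };
console.log(matchKeywords(entry, ['camden', 'coldfusion', 'node']));
// → [ 'camden', 'node' ]
```

With IFTTT the same idea is handled by a "new feed item matches" trigger, so there was no reason to keep my own version running.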
Total side note: The better I get at front end stuff and server stuff - the more I want to make things as simple as possible. Am I alone in that?
During the rewrite there were two really interesting parts I enjoyed writing. First, here is how I handled running the aggregation every hour:
var cron = require('cron');

var cronJob = cron.job('0 * * * *', function() {
	aggregator.process();
	console.log('cron job complete');
});
cronJob.start();
Even though I find the cron format confusing as hell, I love how simple that is. For folks not aware, don't forget ColdFusion 11 also lets you define scheduled tasks for an application. (I'll blog an example of that later this week. I haven't tried it yet and I want to set up a good demo.) You can read more about this Node module here: https://github.com/ncb000gt/node-cron
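If the format trips you up too: the five fields of '0 * * * *' are minute, hour, day of month, month, and day of week, so that pattern fires at minute zero of every hour. A toy matcher (entirely my own sketch, handling only plain numbers and '*' - no ranges, lists, or steps) makes the decoding concrete:

```javascript
// Does a Date match a five-field cron expression?
// Fields: minute hour dayOfMonth month dayOfWeek.
// Only supports literal numbers and '*' wildcards.
function cronMatches(expr, date) {
	var fields = expr.trim().split(/\s+/);
	var values = [date.getMinutes(), date.getHours(), date.getDate(),
	              date.getMonth() + 1, date.getDay()];
	return fields.every(function(f, i) {
		return f === '*' || parseInt(f, 10) === values[i];
	});
}

// '0 * * * *' fires at the top of every hour
console.log(cronMatches('0 * * * *', new Date(2014, 0, 1, 9, 0)));  // → true
console.log(cronMatches('0 * * * *', new Date(2014, 0, 1, 9, 30))); // → false
```

Note that the real node-cron module also accepts an optional sixth field for seconds, so check its README before copying patterns between systems.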
The next part I enjoyed was parsing RSS feeds. I used Node-Feedparser, which worked incredibly well. I had assumed that part of the rewrite was going to take me a few hours, but I finished in less than one. To be fair, it isn't exactly like the ColdFusion version. I'm not doing the conditional HTTP get, but with far fewer blogs to parse nowadays I'm not as concerned about it. On the flip side, cffeed doesn't let you parse RSS from pure data so I had to use the file system in the ColdFusion version. That's not something I had to worry about here.
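For reference, the conditional GET I skipped boils down to replaying the ETag and Last-Modified values from the previous fetch of a feed; if the server answers 304 Not Modified, you can skip parsing entirely. A quick sketch of building those headers - the function and cache shape are hypothetical, not taken from the site's code:

```javascript
// Build conditional request headers from whatever we remembered
// about the last fetch of a feed. A 304 response means the feed
// body hasn't changed and can be skipped.
function conditionalHeaders(cached) {
	var headers = {};
	if (cached && cached.etag) {
		headers['If-None-Match'] = cached.etag;
	}
	if (cached && cached.lastModified) {
		headers['If-Modified-Since'] = cached.lastModified;
	}
	return headers;
}

console.log(conditionalHeaders({
	etag: '"abc123"',
	lastModified: 'Tue, 01 Jul 2014 00:00:00 GMT'
}));
```

With only a handful of blogs to poll each hour, the bandwidth savings just weren't worth the extra bookkeeping.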
Oh, and once again, I used FormKeep to handle my form. It works. It's simple. And it's free.
Archived Comments
is there a way on the new site to display more entries per page with a URL flag (I believe before it was "perpage=50")?
I did not add that - but if you would like me to, it would take about 10 minutes. Say the word.
word!
Done. But I want to wait a bit before deploying as it takes the site down for about 3 minutes. Check around 5CST and if you don't see it working, let me know.
will do... you da man!
Looks like all the recent feeds have no title, they also all say written by Raymond Camden, which is funny, since Adam Tuttle's post on the recent plagiarism came through. ;)
Also, a bit off-topic, but do you have any easy way to adjust your code samples to word wrap or make the site width a bit larger? Had to copy and paste the code into sublime to get a good look at it. :)
There was a bug I fixed with the RSS. I'm seeing the proper titles in the feed. Are you not? Also, I left AUTHOR off the item field but it is on the top level meta. Are you seeing it *per* item? If so, I certainly didn't intend that.
On the code stuff - um - maybe. :) It's all on Github though.
Ok, it is picking up the author and putting it in the items. I'll fix.
It might just be Feedly caching the data too.
Fixed.
I see the option to generate a daily report is gone now - right?
Correct - but if you subscribe to the mail option, it is the same thing - a daily email.
Thanks - I'm not seeing a way to subscribe to the mail option though. I looked in FAQ - not there.
Click the RSS link - on the Feedburner page you should see the option. It may not say "daily", just email, but I'm 90% sure it acts as a daily email.