ColdFusion Bloggers migrated to Node.js


Yes, I did it again. If Adobe ever kills ColdFusion you can blame me. ;) This is just an FYI to let folks know I've rewritten ColdFusion Bloggers as a Node.js site running on the AppFog platform. To be clear, no, I'm not trying to kill ColdFusion! I'm migrating off my old ColdFusion server and setting up my old sites in a simpler form because - well - I want my life to be simpler. My only real "server" will be this blog, and as I'm still adjusting the settings a bit and tuning WordPress, I want everything else I run to be as simple and low-maintenance as possible. Plus - I also kinda want to get better at Node.js!

As before, if folks are curious about the code, I've put it up on GitHub for you to look at and laugh at: https://github.com/cfjedimaster/nodecfbloggers. To be clear, this is not meant to be an example of good Node.js programming. It is just meant to be... well... an example. (And let me publicly thank Derick Bailey. After I posted about the CFLib migration, he shared with me some online training he created. I haven't had a chance to check it out yet, but I definitely appreciate him sharing his knowledge with me.)

For the most part, the conversion was simple. As with CFLib, I wrote a ColdFusion script that used CFMongoDB to insert the data into Mongo. For folks curious as to how that looked, here is the script.


// connect to the "cfbloggers" database via CFMongoDB
mongoConfig = createObject('component','cfmongodb.core.MongoConfig').init(dbName="cfbloggers");

mongo = createObject('component','cfmongodb.core.MongoClient').init(mongoConfig);

// grab every blog from the old SQL database
blogquery = queryExecute("select id, name, description, url, rssurl, status from blogs");
writedump(var=blogquery,top=3);

// clear the Mongo collections so the script can be rerun safely
blogs = mongo.getDBCollection("blogs");
blogs.remove({});

entries = mongo.getDBCollection("entries");
entries.remove({});

entriesAdded = 0;
blogsAdded = 0;

for(i=1; i<=blogquery.recordCount; i++) {
	row = blogquery.getRow(i);
	doc = {
		"name":row.name,
		"description":row.description,
		"url":row.url,
		"rssurl":row.rssurl,
		"status":row.status
	};
	// save() populates doc._id, which the entry documents reference below
	blogs.save(doc);
	blogsAdded++;

	// copy over every entry belonging to this blog
	entryquery = queryExecute("select id, blogidfk, title, url, posted, content, categories, created from entries where blogidfk=:blog", {blog:row.id});
	if(i == 1) writedump(var=entryquery,top=4);

	for(k=1; k<=entryquery.recordCount; k++) {
		entryrow = entryquery.getRow(k);
		entrydoc = {
			"blog":doc._id,
			"title":entryrow.title,
			"url":entryrow.url,
			"posted":entryrow.posted,
			"content":entryrow.content,
			"categories":entryrow.categories,
			"created":entryrow.created
		};
		entries.save(entrydoc);
		entriesAdded++;
	}
}

writeOutput("<p>Done. Blogs added: #blogsAdded#. Entries added: #entriesAdded#</p>");

I then went about rebuilding the functionality with Node. I removed quite a bit - including all user management. I didn't have many users and the main functionality (alerts for keywords) can easily be done with IFTTT. I had an alert for my name that I've already moved over there. If folks need help with it, let me know. I also ripped out the jQuery UI, removed the Ajax page loading, and just simplified as much as possible.
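None of what follows is the actual repo code - that's all on GitHub - but to give a rough sense of the data access in the rewrite, here's a minimal sketch of pulling a page of entries out of Mongo with the standard mongodb driver of that era. The connection string and the page size of 20 are just placeholders.

// sketch only: read the migrated "entries" collection, newest first
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://localhost:27017/cfbloggers', function(err, db) {
	if(err) throw err;

	db.collection('entries')
		.find({})
		.sort({posted: -1})
		.limit(20)
		.toArray(function(err, docs) {
			if(err) throw err;
			docs.forEach(function(entry) {
				console.log(entry.title, entry.url);
			});
			db.close();
		});
});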

Total side note: The better I get at front end stuff and server stuff - the more I want to make things as simple as possible. Am I alone in that?

During the rewrite there were two really interesting parts I enjoyed writing. First, here is how I handled running the aggregation every hour:


// run the feed aggregator at the top of every hour
var cron = require('cron');
var cronJob = cron.job('0 * * * *', function() {
	aggregator.process();
	console.log('cron job complete');
});
cronJob.start();

Even though I find the cron format confusing as hell, I love how simple that is. For folks not aware, don't forget ColdFusion 11 also lets you define scheduled tasks for an application. (I'll blog an example of that later this week. I haven't tried it yet and I want to set up a good demo.) You can read more about this Node module here: https://github.com/ncb000gt/node-cron
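Since I always have to look the format up, here's a quick cheat sheet using the same module. The schedules below are just made-up examples to decode the five fields, not anything the site actually uses.

// The five fields, left to right: minute hour day-of-month month day-of-week
var cron = require('cron');

// minute 0 of every hour - the schedule used above
cron.job('0 * * * *', function() { console.log('hourly'); }).start();

// 02:30 every Monday
cron.job('30 2 * * 1', function() { console.log('Monday at 2:30 AM'); }).start();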

The next part I enjoyed was parsing RSS feeds. I used Node-Feedparser, which worked incredibly well. I had assumed that part of the rewrite was going to take me a few hours, but I finished in less than one. To be fair, it isn't exactly like the ColdFusion version. I'm not doing a conditional HTTP GET, but with far fewer blogs to parse nowadays I'm not as concerned about it. On the flip side, cffeed doesn't let you parse RSS from pure data, so I had to use the file system in the ColdFusion version. That's not something I had to worry about here.
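For anyone curious, this isn't the site's aggregator code, just the basic request-and-pipe pattern from the Node-Feedparser docs with a made-up feed URL; the item properties are the module's normalized fields.

var FeedParser = require('feedparser');
var request = require('request');

// fetch the feed over HTTP and pipe the response into the parser
var req = request('http://example.com/rss.xml');
var feedparser = new FeedParser();

req.on('error', function(err) {
	console.error('request error', err);
});

req.on('response', function(res) {
	if(res.statusCode !== 200) return this.emit('error', new Error('Bad status code'));
	this.pipe(feedparser);
});

feedparser.on('error', function(err) {
	console.error('parse error', err);
});

// each readable event delivers one or more normalized items
feedparser.on('readable', function() {
	var item;
	while(item = this.read()) {
		console.log(item.title, item.link, item.pubdate);
	}
});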

Oh, and once again, I used FormKeep to handle my form. It works. It's simple. And it's free.


Archived Comments

Comment 1 by Andy K posted on 1/26/2015 at 6:42 PM

is there a way on the new site to display more entries per page with a URL flag (I believe before it was "perpage=50")?

Comment 2 (In reply to #1) by Raymond Camden posted on 1/26/2015 at 6:45 PM

I did not add that - but if you would like me to, it would take about 10 minutes. Say the word.

Comment 3 (In reply to #2) by Andy K posted on 1/26/2015 at 6:47 PM

word!

Comment 4 (In reply to #3) by Raymond Camden posted on 1/26/2015 at 6:54 PM

Done. But I want to wait a bit before deploying as it takes the site down for about 3 minutes. Check around 5CST and if you don't see it working, let me know.

Comment 5 (In reply to #4) by Andy K posted on 1/26/2015 at 7:52 PM

will do... you da man!

Comment 6 by Scott Busche posted on 1/27/2015 at 2:39 AM

Looks like all the recent feeds have no title; they also all say written by Raymond Camden, which is funny, since Adam Tuttle's post on the recent plagiarism came through. ;)

Also, a bit off-topic, but do you have any easy way to adjust your code samples to word wrap or make the site width a bit larger? Had to copy and paste the code into sublime to get a good look at it. :)

Comment 7 (In reply to #6) by Raymond Camden posted on 1/27/2015 at 2:42 AM

There was a bug I fixed with the RSS. I'm seeing the proper titles in the feed. Are you not? Also, I left AUTHOR off the item field, but it is on the top-level meta. Are you seeing it *per* item? If so, that's definitely not intended.

On the code stuff - um - maybe. :) It's all on Github though.

Comment 8 (In reply to #7) by Raymond Camden posted on 1/27/2015 at 2:44 AM

Ok, it is picking up the author and putting it in the items. I'll fix.

Comment 9 (In reply to #7) by Scott Busche posted on 1/27/2015 at 2:46 AM

It might just be Feedly caching the data too.

Comment 10 (In reply to #8) by Raymond Camden posted on 1/27/2015 at 2:48 AM

Fixed.

Comment 11 by Lola Lee Beno posted on 2/2/2015 at 12:30 PM

I see the option to generate a daily report is gone now - right?

Comment 12 (In reply to #11) by Raymond Camden posted on 2/2/2015 at 10:11 PM

Correct - but if you subscribe to the mail option, it is the same thing - a daily email.

Comment 13 (In reply to #12) by Lola Lee Beno posted on 2/3/2015 at 11:13 AM

Thanks - I'm not seeing a way to subscribe to the mail option though. I looked in FAQ - not there.

Comment 14 (In reply to #13) by Raymond Camden posted on 2/3/2015 at 12:08 PM

Click the RSS link - on the Feedburner page you should see the option. It may not say "daily", just email, but I'm 90% sure it acts as a daily email.