Bob sent in an interesting email to me this morning:
Ray, we're about to use Spry on one of our sites, but I was wondering what you would recommend as the max records to return in the XML? We may have 2,000 records returned at a time (in a directory format). Do you have any performance tips, etc.?
When it comes to determining how many rows of data to return, you can't focus on the row count alone; you must also consider the size of each row. Imagine a set of XML with just one column per row:
<people>
<name>Jack Abbot</name>
<name>Victor Newman</name>
<name>Nick Newman</name>
</people>
In the example above the XML only contains the person's name. If it contained other information (gender, age, marital status, salary, etc), then the size of the XML dramatically increases with each row. So the first thing you want to do is get a gut feeling for the size of your rows.
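To make that gut feeling concrete, here is a minimal sketch (my own, not from Spry) that serializes one row with one field versus several fields, and multiplies the difference by Bob's 2,000 rows. The field names and values are illustrative only.

```javascript
// Hypothetical sketch: estimate how much each extra field inflates the payload.
// The person fields below are made up for illustration.
function buildRow(person, fields) {
  // Serialize one <person> element containing only the requested fields.
  return "<person>" +
    fields.map(function (f) {
      return "<" + f + ">" + person[f] + "</" + f + ">";
    }).join("") +
    "</person>";
}

var person = { name: "Jack Abbot", gender: "M", age: "58", salary: "90000" };

var slim = buildRow(person, ["name"]);
var full = buildRow(person, ["name", "gender", "age", "salary"]);

// Multiply the per-row difference by 2,000 rows to see the real cost in bytes.
console.log(slim.length, full.length, (full.length - slim.length) * 2000);
```

Even a modest per-row difference adds up quickly at 2,000 rows, which is why row size matters as much as row count.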
Next you want to figure out how long it's taking the browser to load your XML. The best tool for that (well, for Firefox users) is Firebug. It lets you trace AJAX requests, including the URL, the response, and, most importantly for this post, the time each request took to load. If you can't use Firebug, you should also look at ServiceCapture. I actually use them both, as ServiceCapture is great for monitoring Flash Remoting requests.
One of these days I'm going to get together a simple demo showing how to combine client side paging and server side paging to handle very large data sets in Spry.
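Until that demo exists, the core arithmetic of combining the two is simple enough to sketch. In this hypothetical (function and variable names are my own, not a Spry API), the server returns chunks of rows and the client pages through each chunk in smaller slices:

```javascript
// Hypothetical sketch of two-level paging: the server returns chunks of
// serverPageSize rows, and the client pages within each cached chunk.
var serverPageSize = 500;  // rows per XML fetch from the server
var clientPageSize = 25;   // rows shown per screen on the client

function pageToRequest(page) {
  // Work out which server chunk holds this client page, and where
  // that page starts inside the chunk.
  var firstRow = page * clientPageSize;
  return {
    chunk: Math.floor(firstRow / serverPageSize),
    offsetInChunk: firstRow % serverPageSize
  };
}
```

The idea is that most page flips stay inside the cached chunk, and only crossing a chunk boundary triggers another XML request.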
Archived Comments
I agree, it's not the number of rows but the total size of the XML doc that must be considered. A lot of it has to do with the capabilities of the client, so you really have to plan for the lowest common denominator. That being said, I would suggest some sort of pagination, if possible, when working with a large payload.
It's easier to eat a meal in many smaller bites than trying to consume it in one large gulp.
Great, thanks Ray.
Imagine this: You have a directory that returns thousands of rows. Bringing this back in XML is not the best way to do it. But your client wants to get data via a text box and filter/sort the data against all the records in the DB.
We want to use the features of Spry.Data.XMLDataSet and the FilterData function (from the Adobe example page), but it looks like Spry isn't the best solution here. Would you agree, or can you or anyone else suggest a way around this?
Bob,
I think it would depend on how your users were filtering the data; will they be typing the first 2-3 letters and expecting the list to narrow down? Is there some subset you can give them quickly and then start downloading datasets in the background, or use their typing to identify the next batch to grab?
/ejt
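The type-ahead approach ejt describes can be sketched in plain JavaScript. This is a hypothetical outline, not Spry code; all names are mine. The idea: filter the cached batch locally, and only go back to the server when the typed prefix no longer matches what was cached.

```javascript
// Hypothetical sketch: filter cached rows by name prefix on the client.
function matchPrefix(rows, prefix) {
  prefix = prefix.toLowerCase();
  return rows.filter(function (row) {
    return row.name.toLowerCase().indexOf(prefix) === 0;
  });
}

// Fetch a new batch only when the user has typed 2+ characters and the
// cached batch wasn't loaded for a prefix of the current input.
function shouldFetch(prefix, cachedPrefix) {
  return prefix.length >= 2 &&
         (cachedPrefix === null || prefix.indexOf(cachedPrefix) !== 0);
}
```

For example, once the batch for "ja" is cached, typing "jac" narrows locally with no new request; typing "vi" triggers a fresh fetch.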
Also consider your audience. I do intranet apps; 2,000 rows, although big, might not be so bad with some gzip action on your web server. Even so, I'd certainly try to restrict this down a bit, or use pagination as Ray mentioned. Take a look at this:
http://www.nitobi.com/produ...
It does live scrolling. You should be able to pull this off with Spry too, though to be honest I have not tried.
DK
Hey Bob, what you could do is make a new XML call when your user searches. You can set a new XML path as shown here: http://labs.adobe.com/techn...
This should allow you to filter the returned data as shown in the labs example you mention, and it should limit the amount of XML called back.
HTH
Nick
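A minimal sketch of Nick's suggestion, with the caveat that the endpoint name and query parameter below are assumptions I've made up for illustration. The testable part is just building the search URL; pointing the dataset at it (Spry's setURL()/loadData() calls, as I understand them) is shown only in a comment since it needs a browser and the Spry runtime:

```javascript
// Hypothetical sketch: build a server URL carrying the user's search term,
// so the server returns only matching rows instead of the full directory.
// "/directory.cfm" and "search" are made-up names for illustration.
function searchURL(term) {
  return "/directory.cfm?search=" + encodeURIComponent(term);
}

// In the page, you would then (assuming Spry's dataset API) do roughly:
//   dsPeople.setURL(searchURL(userInput));
//   dsPeople.loadData();
```

This keeps each response small because the filtering happens server-side before any XML is generated.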
Someone needs to come up with a component that feeds segments of data on demand. Rather than returning all the data, or pages of it, how about a scroll bar that sets which section of the data is displayed?
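The scrollbar idea above boils down to mapping a scroll position onto a window of rows. Here's a hypothetical sketch of that mapping (names and numbers are mine, not from any real component):

```javascript
// Hypothetical sketch: map a scroll fraction (0.0 at the top, 1.0 at the
// bottom) onto the window of rows to request and display.
function windowForScroll(fraction, totalRows, windowSize) {
  // The window's start can range from row 0 up to the last full window.
  var maxStart = Math.max(0, totalRows - windowSize);
  var start = Math.round(fraction * maxStart);
  return { start: start, end: Math.min(totalRows, start + windowSize) };
}
```

As the user drags the scrollbar, the component would only fetch the window the thumb position points at, never the whole 2,000-row set.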