xbill
Posted: 02/12/2002, 9:05 AM
I have several large tables that I want to browse with a grid
(a standard grid view showing x records per page).
On small tables there are fewer records,
so the grid page builds quickly.
On larger tables it seems like CodeCharge
reads the whole table even though I only want
the first x records per page
(the generated SQL appears to do a select
over the whole table instead of just the
records to display on the current page).
Is there an easy way to tweak the generated SQL to use
something like the MySQL LIMIT option to narrow
the select to the number of items per page?
thanks,
-bill
Brent
Posted: 02/12/2002, 9:26 AM
I'd like to see this feature too. Let me know when you have it working. :)
Basically you need to make use of the "LIMIT [offset,] rows" feature of
MySQL. For page 1 the offset would be 0 since you want it to start from
the first returned row. For each page you need to increase offset by the
number of rows displayed in the grid, 10 for example.
So if the grid displays 10 rows per page and you're on page 4, the SQL
should have "LIMIT ((pagenum-1)*rowsperpage), rowsperpage", or in this case
"LIMIT 30,10". It looks like you have to pass the current page number
as a parameter and add the LIMIT clause to the SQL statement. That's easy
enough to do.
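For example, a minimal sketch (table and column names are made up):

    -- page 4 of a grid showing 10 rows per page:
    -- offset = (pagenum - 1) * rowsperpage = (4 - 1) * 10 = 30
    SELECT id, name, price
    FROM products
    ORDER BY name
    LIMIT 30, 10    -- skip the first 30 rows, return the next 10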
But here's the catch. You will have to either trick the current Next/Prev
buttons into thinking there are rows before and after the current page, or
replace them with buttons of your own. You can trick the Next button simply
by setting Rows to 1 more than the number of rows on the page.
But how do you trick the Prev button? Each page starts at an offset,
so there are no previous rows in the result set.
Using LIMIT will definitely speed up grids on large tables. Let us know if
you solve the Prev button problem.
Xander
Posted: 02/12/2002, 11:22 PM
I guess CC needs a universal solution across all DB types. One mechanism we have used (in Java outside of CC; I haven't tried it in CC yet) is to first fetch only the primary keys (usually an integer column). We then navigate through those keys and dynamically construct an SQL IN() clause for the current page, and do a second query to retrieve only the records for that page. The key here is that the first query is quick since it returns only an integer column. It would be interesting to try to do this in CC...
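Roughly like this (hypothetical table and column names):

    -- 1) the cheap key-only query: a narrow result set of integers
    SELECT id FROM products ORDER BY name

    -- 2) collect the keys for the current page in code, then fetch
    --    only those records (here: three keys picked for one page)
    SELECT id, name, price
    FROM products
    WHERE id IN (417, 58, 2293)
    ORDER BY name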
Brent
Posted: 02/13/2002, 8:10 AM
Xander,
>>I guess CC needs a universal solution across all DB types. One mechanism we have used (in
>>Java outside of CC; I haven't tried it in CC yet) is to first fetch only the primary keys (usually an
>>integer column). We then navigate through those keys and dynamically construct an SQL IN()
>>clause for the current page, and do a second query to retrieve only the records for that page.
>>The key here is that the first query is quick since it returns only an integer column. It would be
>>interesting to try to do this in CC...
That is an interesting solution, and it avoids any problem with sorting on a
non-primary index (because you're using IN and not Rcd_Id >= n and Rcd_Id < n+10).
But it of course means you need to prefetch the next '10' keys from the table
using another query in order to construct the IN SQL for the grid. So how do you
limit the number of rows returned by that query? It's like a catch-22. :)
Your pre-fetch query needs some sort of limit on it so it doesn't return all
10 million rows when you build the IN statement for each page. So you need to
set up some sort of LIMIT on that query.
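On MySQL you could just apply LIMIT to the key-only query, e.g. (hypothetical names again):

    -- key-only pre-fetch, limited to the current page (page 4, 10 rows):
    SELECT id
    FROM products
    ORDER BY name
    LIMIT 30, 10
    -- the ten ids returned feed the IN() clause of the grid query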
Or is there another way around this problem?
Ron Cicotte
Posted: 02/27/2002, 11:31 AM
You guys have the right idea but the wrong location. Dealing with a keyed subset of a large table is obviously the right solution, but this kind of data manipulation is best accomplished with a stored procedure in the database: a simple call to a stored procedure, passing the criteria of the subset you are looking for. The stored procedure can then use #temp tables to build an index of the rows you are looking for, and then join the #temp table on the large table. The IN clause is a notoriously costly statement and should never be used on large tables, because it will require a scan for each element in the set. Better to get the primary keys using a while loop and then join on the #temp table. You will also want an index on the fields used in the criteria, so you avoid a scan on that operation as well.
The bottom line: this is best done in a stored procedure on the database, and never use the IN statement on large tables. This comes from over 20 years of experience in data architecture and systems integration.
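A rough sketch in Sybase/SQL Server-style T-SQL (#temp tables are their syntax; MySQL had no stored procedures at the time, and all table/column names here are made up):

    CREATE PROCEDURE get_grid_page
        @offset INT,    -- rows to skip
        @rows   INT     -- rows per page
    AS
    BEGIN
        -- build a small keyed index of just the rows we need
        CREATE TABLE #page_keys (seq INT IDENTITY(1,1), rcd_id INT)

        SET ROWCOUNT @offset + @rows       -- stop scanning early
        INSERT INTO #page_keys (rcd_id)
        SELECT rcd_id FROM big_table ORDER BY name
        SET ROWCOUNT 0

        -- join the temp table back on the large table; no IN clause
        SELECT b.*
        FROM big_table b
        JOIN #page_keys k ON b.rcd_id = k.rcd_id
        WHERE k.seq > @offset
        ORDER BY k.seq

        DROP TABLE #page_keys
    END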
Ron Cicotte -
President Summer Street Systems http://summerstreet.net
ron.cicotte@summerstreet.net
ethar1
Posted: 03/18/2003, 10:32 PM
me too