////////////////////////////////////
CREATE PROCEDURE dbo.test2
(
    @Page_No int,
    @PageSize int,
    @RowCount int output
)
AS
declare @intBeginID int
declare @intEndID int
declare @Counter int

set @Counter = 1
select @RowCount = count(*) from project

-- scrollable cursor over the full result set
declare cro_fastread cursor scroll
for select * from project order by sector

open cro_fastread

-- first and last row numbers of the requested page
select @intBeginID = (@Page_No - 1) * @PageSize + 1
select @intEndID = @intBeginID + @PageSize - 1

-- jump to the first row of the page, then fetch the rest one at a time
fetch absolute @intBeginID from cro_fastread

WHILE @@FETCH_STATUS = 0 and @Counter < @PageSize
BEGIN
    set @Counter = @Counter + 1
    FETCH NEXT FROM cro_fastread
END

close cro_fastread
deallocate cro_fastread
/////////////////////////////////////////
My problem is that the "FETCH" method can only get one row at a time, so many recordsets are returned. How can I merge all the returned recordsets into one?
Thanks in advance!
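One common way to get a single result set out of a cursor loop like this is to fetch each row into variables, collect them in a table variable, and select the collected rows once at the end. A rough sketch only (not from the thread): the ProjectID and Sector column names are assumed, and the cursor would have to select those columns explicitly so that FETCH ... INTO works.

-- ProjectID/Sector are assumed column names; adjust to the real project table
declare @ProjectID int
declare @Sector varchar(50)
declare @Page table
(
    ProjectID int,
    Sector varchar(50)
)

-- jump to the first row of the page, then copy each fetched row into @Page
fetch absolute @intBeginID from cro_fastread into @ProjectID, @Sector
while @@FETCH_STATUS = 0 and @Counter <= @PageSize
begin
    insert into @Page (ProjectID, Sector) values (@ProjectID, @Sector)
    set @Counter = @Counter + 1
    fetch next from cro_fastread into @ProjectID, @Sector
end

-- one result set for the whole page
select * from @Page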
|||Why not use the built-in paging features of the DataGrid?
|||That's too slow if the result set is large.
|||Hi,
This article might help:
Creating a Stored Procedure for Custom Paging with the ASP.NET DataGrid Control
Regards,
|||The article on SQL Junkies uses a temp table and 2 queries. Here's my sexy solution without temp tables and only one query:
Let's say your page size is 10 and you want page 3. Basically what you do is you get the TOP 40 records, and from that set get the bottom 10 records by using TOP and ordering it DESCENDING.
I don't know if it will be faster than the SqlJunkies.com solution, but it will at the very least be in the same ballpark. You'll need to use dynamic SQL, since TOP cannot take variables.
Here it is: (you can run it on Northwind)
declare @PageNo int,
        @PageSize int,
        @Query varchar(1000)

set @PageNo = 3
set @PageSize = 10

set @Query = 'select O.*
from (select top ' + cast(@PageSize as varchar) + ' OrderID
      from (select top ' + cast(@PageSize * (@PageNo + 1) as varchar) + ' OrderID
            from Orders
            order by OrderID) as T
      order by T.OrderID desc) as B
join Orders O
    on B.OrderID = O.OrderID
order by O.OrderID'

exec(@Query)
|||After thinking about it, there is an even better way to do it: get the top 40 records and use a left join to exclude the top 30 records. My solution above will not work properly (as it is) for the last page in the set, since it always gets 10 records. Here's my second try:
declare @PageNo int,
        @PageSize int,
        @Query varchar(1000)

set @PageNo = 3
set @PageSize = 10

set @Query = 'select O.*
from (select top ' + cast(@PageSize * (@PageNo + 1) as varchar) + ' OrderID
      from Orders
      order by OrderID) as T
left join (select top ' + cast(@PageSize * @PageNo as varchar) + ' OrderID
           from Orders
           order by OrderID) as B
    on T.OrderID = B.OrderID
join Orders O
    on T.OrderID = O.OrderID
where B.OrderID is null
order by O.OrderID'

exec(@Query)
Again, you can run this query on Northwind.
|||This is a nice one :), but (as there is always a but :P) it has 2 drawbacks:
Due to the TOP clauses, the higher the page number, the slower the query will run, while a #temp table solution has the same performance for all pages. Also, increasing the page size will drop the performance exponentially (again due to using TOP).
So I doubt anyone (dare :P) will come up with something better/faster than a #temp table solution :).
|||Yeah, it probably will be slightly slower - although we'd have to test it to be sure. With very large tables you probably won't be able to use the temp table solution anyway, since the WHOLE TABLE is written to the temp table for EVERY query. In that case you'll have to use a helper column that's updated by a trigger.
That will be faster than the temp table and my single-select query!
So I prefer my single select just because it's simple and sexy; for tables too big for it (more than 1 million rows) I'd use the trigger-updated helper column.
But think about it: Will you really want to page through 10,000 pages of 100 rows each?
NO.
|||:: Yeah, it probably will be slightly slower - although we'd have to test it to be sure.
Of course I did test it BEFORE I posted my reply - using the Northwind database. Here are the results:
>> times taken from the SQL Query Analyzer execution plan
>> no indexes or other optimizations
>> hardware: IDE HDD, 80 GB, 2 * AMD 1800+
Orders table, 830 records
===================
Paging, using TOP:
First Page >> 0.0253
Last Page >> 0.224
Paging, using a temp table:
First Page >> 0.088
Last Page >> 0.088
Payments table, 100,000 records
========================
Paging, using TOP:
First Page >> 9.75
Last Page >> 12.8
Paging, using a temp table:
First Page >> 5.967
Last Page >> 5.967
The main difference is that the solution using TOP is very CPU expensive due to the subselects that need to be ordered - check the execution plan and see what I mean - while a temp table solution is less CPU expensive (only one select/order), and the I/O cost of the #temp table is marginal (0.35 sec of the 5.967 seconds is I/O time).
This is why the larger the table gets, the slower the TOP version will be.
:: With very large tables you probably won't be able to use the temp table solution
:: anyway, since the WHOLE TABLE is written to the temp table for EVERY query.
Not the whole table, only the ID column needs to be written to the temp table.
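For reference, a minimal sketch of the #temp-table approach being discussed (this is not the SQL Junkies article's exact code; it assumes Northwind's Orders table and page 3 of size 10, 0-based, like the TOP examples above):

declare @PageNo int, @PageSize int
set @PageNo = 3
set @PageSize = 10

-- only the key column goes into the temp table; the identity column
-- records each row's position in the desired sort order
create table #PageIndex
(
    RowNum int identity(0, 1) primary key,
    OrderID int not null
)

insert into #PageIndex (OrderID)
select OrderID
from Orders
order by OrderID

-- fetch the requested page by joining back to the base table
select O.*
from #PageIndex P
join Orders O
    on O.OrderID = P.OrderID
where P.RowNum >= @PageNo * @PageSize
  and P.RowNum < (@PageNo + 1) * @PageSize
order by P.RowNum

drop table #PageIndex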
:: In that case you'll have to use a helper column that's updated by a trigger.
Not sure what you mean here?
:: That will be faster than the temp table and my single-select query!
See the results above.
|||Yeah, so the temp table is 50% faster. Not exponentially slower as claimed.
But still, you don't think there's a faster solution? My trigger-updated helper column idea WILL be faster. By orders of magnitude. If you're interested I'll post the full solution.
|||Sure I am.
|||:: Yeah, so the temp table is 50% faster. Not exponentially slower as claimed.
I didn't claim that; I was not comparing the two solutions when I said: "Also, increasing the page size will drop the performance exponentially (again due to using TOP)."
I was saying that a TOP solution will run exponentially slower the larger the page number is (see the results: 0.02 for the first page and 0.2 for the last page), while for a temp table the page number does not affect the performance of the query (same performance for all pages).
|||I am still interested to see it and waiting ... :)
|||The triggers are trivial, so I won't write them for you; I'll explain the idea instead.
What you do is you add an int column to the table, call it OrderingID. This column is kept up to date with insert, update and delete triggers on the table. The triggers will ensure that the values in that column are unique, incremental (no gaps between numbers) and in the same sort order as the paging.
So it will have the same values as the temp table's auto-increment column, and therefore negates the need for a massive temp table altogether.
Getting page 3 is now as simple as
select *
from TableName
where OrderingID >= (3*10)
and OrderingID < ((3+1)*10)
order by OrderingID
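The same fetch with variables instead of literals (hypothetical @PageNo/@PageSize names, 0-based page numbers as in the example above):

declare @PageNo int, @PageSize int
set @PageNo = 3
set @PageSize = 10

select *
from TableName
where OrderingID >= @PageNo * @PageSize
  and OrderingID < (@PageNo + 1) * @PageSize
order by OrderingID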
And as a bonus you can now add an index to that column for lightning selects.
The triggers will ensure that all the requirements for the above select are met (numbers from 0 to N-1, uniqueness, no gaps in the sequence) at all times. These triggers are very simple and anyone should be able to write them.
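For illustration only, a rough sketch of what such triggers could look like. This is not the poster's code: the trigger names are made up, it assumes the Orders table is paged by OrderID, that new rows always get the highest OrderID, and that the OrderingID column already exists; an update trigger on the sort column would be needed as well.

-- insert: give each new row the next slot(s) at the end of the sequence;
-- a row's 0-based rank by OrderID equals the count of rows with a smaller OrderID
-- (assumes new OrderIDs are always the highest, so existing rows keep their slots)
create trigger trg_Orders_OrderingID_Insert on Orders
after insert
as
    update O
    set OrderingID = (select count(*) from Orders O2 where O2.OrderID < O.OrderID)
    from Orders O
    join inserted i on i.OrderID = O.OrderID
go

-- delete: close the gaps so OrderingID stays 0..N-1 with no holes
create trigger trg_Orders_OrderingID_Delete on Orders
after delete
as
    update O
    set OrderingID = O.OrderingID
        - (select count(*) from deleted d where d.OrderingID < O.OrderingID)
    from Orders O
    where exists (select * from deleted d where d.OrderingID < O.OrderingID)
go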
|||Oh, you meant that ... well, it is only manageable if you have 1-2 sort criteria. As soon as you need to be able to order by any column (damn, I always have to do that :S) you can't use this method effectively.
Another way would be creating an indexed view for each sort criterion - but I haven't looked into those yet.