Channel: Forum Getting started with SQL Server

Efficient way to join huge tables


I have a table with 20M rows and five columns: time, id, value, value_lst, value_nxt. For each id and time there is a status value. The first three columns are known; I want to compute value_lst and value_nxt, i.e. the values of the previous and the next period for a given id and time. I use the following queries to create the table and populate the values:

create table tab1 (
    id        nvarchar(12),
    time      int,
    value     nvarchar(8),
    value_lst nvarchar(8),
    value_nxt nvarchar(8)
);

insert into tab1 with (tablock) (id, time, value)
select id, time, value
from tab0;

update a1
set a1.value_lst = b1.value,
    a1.value_nxt = c1.value
from tab1 a1
left join tab1 b1
    on a1.id = b1.id
    and a1.time = b1.time + 1
left join tab1 c1
    on a1.id = c1.id
    and a1.time = c1.time - 1;
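Since this is SQL Server 2016, one alternative worth noting (a sketch, not tested against this data) is to replace both self-joins with the LAG and LEAD window functions, which read each partition in a single pass:

```sql
-- Sketch: fill value_lst/value_nxt with window functions instead of self-joins.
-- Caveat: LAG/LEAD take the previous/next row in time order, whereas the
-- self-join matches time - 1 / time + 1 exactly, so results differ if the
-- time series has gaps within an id.
;with cte as (
    select value_lst,
           value_nxt,
           lag(value)  over (partition by id order by time) as v_lst,
           lead(value) over (partition by id order by time) as v_nxt
    from tab1
)
update cte
set value_lst = v_lst,
    value_nxt = v_nxt;
```

Updating through a CTE like this is allowed because the update targets base columns of a single table.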

 

Tab0 is the source table from which I get id/time/value.

The query seems to take forever, and the log file grew by more than 10 GB. I'm wondering what the most efficient way to write this query is. I know an index will speed up the join, but how can I reduce the logging?
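On the logging side, one option (a sketch; minimal logging requires the SIMPLE or BULK_LOGGED recovery model, and the same gap caveat as above applies since LAG/LEAD use row order rather than time - 1 / time + 1) is to compute everything during the load with SELECT ... INTO, which can be minimally logged, and skip the separate fully logged UPDATE entirely:

```sql
-- Sketch: build tab1 in one minimally logged pass from tab0,
-- computing the previous/next values at load time.
select id, time, value,
       lag(value)  over (partition by id order by time) as value_lst,
       lead(value) over (partition by id order by time) as value_nxt
into tab1
from tab0;
```

This avoids writing every row twice (once on insert, once on update), which is where much of the 10 GB of log likely comes from.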

I'm using SQL Server 2016 on Windows 10 64-bit.

Thanks,

Jason


