SQL Server performance: one-to-one join, or select within select?


I have a schema that involves customers, facilities for each customer, projects for each facility, and items for each project.

I want to know if there is a general rule of thumb concerning the performance of listing the customer for each item. One way I think is common is:

```sql
select
    item.iteminfo,
    customer.customerinfo
from item
inner join project on item.projectid = project.projectid
inner join facility on project.facilityid = facility.facilityid
inner join customer on facility.customerid = customer.customerid
where (item filtering criteria)
```

Each table is indexed on its primary and foreign keys.
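For concreteness, indexing the foreign keys would look something like the following. This is a sketch of what the question describes, not code from the original post, and the index names are assumed:

```sql
-- nonclustered indexes on each foreign-key column (names are illustrative)
create nonclustered index ix_facility_customerid on facility (customerid);
create nonclustered index ix_project_facilityid on project (facilityid);
create nonclustered index ix_item_projectid on item (projectid);
```

Note that the test script later in this post only creates the primary keys, so results there may differ slightly from a schema with these indexes in place.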

However, in benchmarking, I found this to be more performant:

```sql
select
    item.iteminfo,
    (select top 1 customerinfo
     from customer
     where customerid = (select top 1 customerid
                         from facility
                         where facilityid = (select top 1 facilityid
                                             from project
                                             where project.projectid = item.projectid)))
from item
where (item filtering criteria)
```
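One caveat worth noting (an editorial aside, not part of the original question): the two forms are not strictly equivalent. The scalar subquery returns NULL for any item whose project/facility/customer chain is broken rather than filtering that item out, and `top 1` without an `order by` is nondeterministic if duplicates existed. The closer join-based analogue of the subquery version would be an outer join, sketched as:

```sql
-- left joins preserve items with no matching customer, like the scalar subquery does
select
    item.iteminfo,
    customer.customerinfo
from item
left join project on item.projectid = project.projectid
left join facility on project.facilityid = facility.facilityid
left join customer on facility.customerid = customer.customerid
where (item filtering criteria)
```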

Is there a reason for the difference? If so, how can I use it to make other queries more performant?

Thanks!

In questions about SQL, it's helpful to include DDL+DML for sample data. I'll add a script at the end of this post that uses CTEs to generate a bunch of test data.

After a few tests, I find the joins to be more performant.

If I run the two queries against each other for a single item id (where item.itemid = 500, e.g.), each query has a 50% cost.

(execution plan screenshot: itemid = 500)

If I use a range (in this case, itemid between 200 and 8000), the query cost starts to favor the joins (19% vs. 81%), and the joins consistently execute faster.

(execution plan screenshot: itemid between 200 and 8000)

I checked the speed like this:

```sql
declare @start datetime
set @start = getdate()

-- query 1

select getdate() - @start
set @start = getdate()

-- query 2

select getdate() - @start
```
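As an aside (my suggestion, not part of the original answer), SQL Server can report per-statement CPU time, elapsed time, and logical reads itself, which avoids the hand-rolled `getdate()` arithmetic:

```sql
-- SQL Server prints timing and I/O statistics to the Messages tab for each statement
set statistics time on;
set statistics io on;

-- query 1
-- query 2

set statistics time off;
set statistics io off;
```

Logical reads in particular are a more stable comparison metric than wall-clock time, since they are unaffected by caching warm-up and machine load.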

If I increase the range to itemid between 200 and 80000, I see even more separation: the query cost is 5% vs. 95% in favor of the joins, and the joins execute in ~330 ms in this scenario vs. ~420 ms for the alternative query.

(execution plan screenshot: itemid between 200 and 80000)

Is there a real explanation for the difference? I'm not sure; maybe there's a sargability issue or something.
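One way to dig further (again, my suggestion rather than part of the original answer) is to capture the full XML plans for both queries and compare the operators and estimated row counts directly:

```sql
-- each SET SHOWPLAN_XML statement must be the only statement in its batch
set showplan_xml on;
go

-- query to inspect goes here (it is compiled but not executed)

go
set showplan_xml off;
go
```

Differences in join operators (nested loops vs. hash/merge) or in estimated vs. actual rows would point at the source of the cost gap.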

Here's the DDL/DML, which creates 100 customers, ~1000 facilities, ~10000 projects, and ~100000 items:

```sql
create table customer (customerid int primary key, customerinfo varchar(25))
create table facility (facilityid int primary key, customerid int foreign key references customer(customerid))
create table project (projectid int primary key, facilityid int foreign key references facility(facilityid))
create table item (itemid int primary key, iteminfo varchar(25), projectid int foreign key references project(projectid))
go

;with cte as
(select 1 as id, 'customer' + cast(1 as varchar(5)) as info
 union all
 select cte.id + 1, 'customer' + cast(cte.id + 1 as varchar(5))
 from cte
 where cte.id < 100)
insert customer
select id, info from cte option (maxrecursion 100)
go

;with cte as
(select 1 as id, 1 as customerid
 union all
 select cte.id + 1, ((cte.id + 1) / 10) + 1
 from cte
 where cte.id < 999)
insert facility
select id, customerid from cte option (maxrecursion 1000)
go

;with cte as
(select 1 as id, 1 as facilityid
 union all
 select cte.id + 1, ((cte.id + 1) / 10) + 1
 from cte
 where cte.id < 9989)
insert project
select id, facilityid from cte option (maxrecursion 10000)
go

;with cte as
(select 1 as id, 1 as projectid, 'item' + cast(1 as varchar(5)) as iteminfo
 union all
 select cte.id + 1, ((cte.id + 1) / 10) + 1, 'item' + cast(cte.id + 1 as varchar(5))
 from cte
 where cte.id < 99889)
insert item
select id, iteminfo, projectid from cte option (maxrecursion 0)
go
```

