SQLServerWiki

“The Only Thing That Is Constant Is Change”

Ways to migrate SQL Server User databases from one storage to another.

Posted by database-wiki on April 28, 2016

  1. Copy the files. This would be easy but relatively slow, and it requires taking the database offline to move the files. Since uptime is paramount, this is a no-go.
  2. Backup and restore with a new name onto the new SAN. This is a very strong candidate due to its simplicity.

Restore a backup on the new storage (with a different database name) while the current database remains online.

Create a job to restore the transaction log backups (taken in the meantime) of the original database to the renamed database.

Cutting over would involve backing up the tail of the transaction log on the old database, restoring it on the new one, and then swapping names.

The cutover would be quick enough to fit in a maintenance window, making this a very attractive option. A sketch of the initial restore is shown below, followed by the cutover steps.
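The initial restore onto the new storage might look like the following minimal sketch; the database name, backup file, logical file names and target paths are hypothetical and must be adjusted to your environment.

-- Restore a copy of the database onto the new SAN under a new name,
-- leaving it in NORECOVERY so that subsequent log backups can still be applied.
RESTORE DATABASE Check21DB_New
FROM DISK = 'C:\MSSQL\BACKUP\Check21DB_full.bak'
WITH MOVE 'Check21DB_Data' TO 'N:\NewSAN\Check21DB_New.mdf',
MOVE 'Check21DB_Log' TO 'N:\NewSAN\Check21DB_New_log.ldf',
NORECOVERY;

-- Scheduled job: keep applying the transaction log backups taken on the original database.
RESTORE LOG Check21DB_New
FROM DISK = 'C:\MSSQL\BACKUP\FTRN\Check21DB_xxx.trn'
WITH NORECOVERY;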

  1. Back up the last transaction log of the primary database:

BACKUP LOG Check21DB TO DISK = 'C:\MSSQL\BACKUP\FTRN\' WITH NORECOVERY

  2. Restore the above last log for each database on the renamed database WITH RECOVERY:

RESTORE LOG DBNAME FROM DISK = '\\SecondaryServer\C$\MSSQL\Backup\FTRN\xxx.trn' WITH RECOVERY

Note: This has an overhead on the instance, as we are introducing a duplicate copy of large databases on the same instance, which increases resource consumption and may have a performance impact.

  3. Add and remove files from the filegroup. This approach is slow and tedious, but SQL Server allows you to add and remove files from filegroups at almost any time, and this can be leveraged to move data between different storage systems. What makes it attractive is that the operation is completely online with no interruption to users, all types of data (including LOB data) are moved, and it works in any edition of SQL Server.

Note: As always, there are trade-offs. It is very slow, and LOB data makes it even slower. Shrinking the data files may introduce fragmentation, but that can be addressed with REBUILD or REORGANIZE; if speed is not important, this could be the way to go. A sketch of the add/empty/remove sequence follows.
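A minimal sketch of the add/empty/remove sequence is shown below; the database, file and filegroup names and paths are hypothetical.

-- Add a new data file on the new storage to the filegroup being migrated (example names/paths only)
ALTER DATABASE Check21DB
ADD FILE (NAME = N'Check21DB_Data2', FILENAME = N'N:\NewSAN\Check21DB_Data2.ndf', SIZE = 10GB)
TO FILEGROUP [SECONDARY];

-- Move all data out of the old file into the remaining files of the filegroup
DBCC SHRINKFILE (N'Check21DB_Data_Old', EMPTYFILE);

-- Drop the now-empty file that lives on the old storage
ALTER DATABASE Check21DB REMOVE FILE Check21DB_Data_Old;

Note that the primary data file of the PRIMARY filegroup cannot be removed, so this sequence applies to secondary files and filegroups.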

  4. Mirroring. Much like a rolling upgrade, database mirroring can be used.

Steps:

  1. Set up database mirroring to a secondary instance, with the secondary database(s) placed on the Nimble storage.
  2. Fail over the database(s) to the secondary instance.
  3. Remove mirroring and bring the secondary database online:

ALTER DATABASE [database_name] SET PARTNER OFF

GO

RESTORE DATABASE [database_name] WITH RECOVERY

GO

  4. Detach the secondary database from the secondary instance (sketched below), then remove the clustered disk and add it to the primary instance.
  5. Remove or rename the primary database(s).
  6. Attach the database(s) from the Nimble storage.

Note: Unfortunately the primary database and the secondary (mirror) database cannot reside on the same instance, so a secondary instance is needed.
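The detach/attach portion of the cutover might look like this minimal sketch; the database name and file paths are hypothetical.

-- On the secondary instance: detach the recovered copy
EXEC sp_detach_db @dbname = N'Check21DB';

-- On the primary instance, after the clustered disk holding the files has been moved over:
CREATE DATABASE Check21DB
ON (FILENAME = N'N:\NewSAN\Check21DB.mdf'),
(FILENAME = N'N:\NewSAN\Check21DB_log.ldf')
FOR ATTACH;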

  5. Log shipping. As an alternative to mirroring, we can use log shipping.

Steps:

  1. Set up log shipping to a secondary instance, with the secondary database(s) placed on the Nimble storage.
  2. Change the secondary into the primary:
  3. Ensure all scheduled backups have completed successfully.
  4. Stop/disable the log shipping backup jobs.
  5. Run the (log shipping) transaction log copy and restore jobs on the secondary until the last log is applied, then disable all copy and restore jobs.
  6. Remove log shipping from the primary server's maintenance plans for the required databases.
  7. Go to the maintenance jobs folder on the primary.
  8. Delete the destination server information from inside the maintenance job for each database.
  9. Remove log shipping and delete the maintenance job.
  10. Kill all user sessions in the required database on the primary server.
  11. Back up the last transaction log on the primary server and place it in an easily accessible folder, e.g. \\Primary Server Name\C$\MSSQL\Backup\FTRN:

BACKUP LOG Check21DB TO DISK = 'C:\MSSQL\BACKUP\FTRN\' WITH NORECOVERY

  12. Copy the last logs to the secondary server, e.g. to the 'FTRN' folder (\\Secondary Server\C$\MSSQL\Backup\FTRN):

E.g.: xp_cmdshell 'copy \\PrimaryServer\C$\MSSQL\BACKUP\FTRN\*.trn \\SecondaryServer\C$\MSSQL\Backup\FTRN'

  13. Restore the above last logs for each database on the secondary WITH RECOVERY:

RESTORE LOG DBNAME FROM DISK = '\\SecondaryServer\C$\MSSQL\Backup\FTRN\xxx.trn' WITH RECOVERY

  14. Detach the secondary database from the secondary instance, then remove the clustered disk and add it to the primary instance.
  15. Remove or rename the primary database(s).
  16. Attach the database(s) from the Nimble storage.

Note: This is more complicated than mirroring, and likewise the primary and secondary databases cannot reside on the same instance.

Posted in CORE ISSUES, DATABASE SPACE MANAGEMENT

Disk Benchmark testing using SQLIO.

Posted by database-wiki on April 28, 2016

SQL Server I/O characteristics.

OPERATION | RANDOM/SEQUENTIAL | READ/WRITE | SIZE RANGE
Create Database | Sequential | Write | 512 KB
Backup Database | Sequential | Read/Write | Multiples of 64 KB
Restore Database | Sequential | Read/Write | Multiples of 64 KB
DBCC CHECKDB | Sequential | Read | 8 KB - 64 KB
ALTER INDEX REBUILD (read phase) | Sequential | Read | 64 KB - 512 KB
ALTER INDEX REBUILD (write phase) | Random or Sequential | Write | 8 KB up to 128 KB
sys.dm_db_index_physical_stats | Sequential | Read | 8 KB - 64 KB
Insert / Update / Delete | Random or Sequential | Write | 64 KB - 512 KB
Select | Random or Sequential | Read | 64 KB - 512 KB
TempDB | Random or Sequential | Read/Write | 8 KB - 64 KB
Transaction Log | Sequential | Write | Up to 60 KB

Summary of SQL Server I/O Patterns that Determine SQLIO parameters.

  1. SQL Server uses only one worker thread per logical CPU at a time, managed by the SOS scheduler inside SQL Server. So the SQLIO param.txt file should be configured to use the number of logical CPUs as the number of threads.
  2. SQL Server performs data writes periodically, when Checkpoint runs (or under memory pressure when the Lazy Writer kicks in). Data modifications are done in memory and those changes are flushed to disk by a single system thread that executes Checkpoint (or Lazy Writer). Therefore it makes no sense to configure more than 1 thread in param.txt when testing writes. In addition, SQL Server writes up to 32 pages (256 KB) at a time, so it makes sense to use 256 KB as the write size. One exception is eager writes, which are performed by minimally logged operations; those would be 8 KB (or perhaps 64 KB) in size.
  3. SQL Server performs data reads continuously. Most reads are done in 8 KB pages (single-page reads). Read-ahead reads are done in 64-page (512 KB) chunks. Reading full extents (64 KB) is also somewhat common. Therefore these three sizes are the only read sizes.
  4. Transaction log writes vary in size depending on the workload (transactions performed) but typically do not exceed 60 KB. Choosing a size to test for log writes is therefore difficult; as a compromise we choose 8 KB (which covers both eager writes and log writes).
  5. SQL Server is designed to maximize sequential I/O as much as possible (both reads and writes). Random versus sequential I/O plays a smaller role on today's SAN systems because of large storage caches, optimized read/write mechanisms, multiple spindles, etc.

Based on these patterns, below is the list of commands that I have used.

Random Read Tests

Similar to single-page reads (8 KB) in SQL Server. Use as many threads as logical CPUs in the paramMT.txt.

In our case there are 48 CPUs and we are using 90 GB test files, so the paramMT.txt line is:

O:\SQLIOMP\Disk1\testfile.dat 48 0x0 92160

sqlio -kR -s7200 -frandom -o8 -b8 -LS -FparamMT.txt > Reads8KRandom8Oustanding.txt

Similar to extent-read I/O (64 KB); use as many threads as CPUs in the paramMT.txt.

sqlio -kR -s7200 -frandom -o8 -b64 -LS -FparamMT.txt > Reads64KRandom8Oustanding.txt

Similar to Read-Ahead in SQL; use as many threads as CPUs

sqlio -kR -s7200 -frandom -o8 -b512 -LS -FparamMT.txt > Reads512KRandom8Oustanding.txt

Random Write tests

8 KB writes, similar to single-page writes in SQL Server, which are rare, or potentially similar to log writes (though log write sizes vary); eager writes may also be similar. Use as many threads as CPUs in the paramMT.txt.

sqlio -kW -s7200 -frandom -o8 -b8 -LS -FparamMT.txt > Writes8KRandom8Outstanding.txt

256 KB writes, similar to Checkpoint in SQL Server, with a realistic outstanding I/O count of 100. This will really push the I/O subsystem, but it is realistic and can happen. Use 1 thread (at most 2) in the paramST.txt, similar to Checkpoint.

In our case there are 48 CPUs and we are using 90 GB test files, so the paramST.txt line is:

O:\SQLIOMP\Disk1\testfile.dat 1 0x0 92160

sqlio -kW -s7200 -frandom -o100 -b256 -LS -FparamST.txt > Writes256KRandom100Outstanding.txt

256 KB writes, similar to Checkpoint in SQL Server, with a possible outstanding I/O count of 200. This will push the I/O subsystem very hard, but it could happen. Use 1 thread in the paramST.txt, similar to Checkpoint.

sqlio -kW -s7200 -frandom -o200 -b256 -LS -FparamST.txt > Writes256KRandom200Outstanding.txt

Sequential Read Tests

sqlio -kR -s7200 -fsequential -o8 -b8 -LS -FparamMT.txt > Reads8Ksequential8Oustanding.txt

sqlio -kR -s7200 -fsequential -o8 -b64 -LS -FparamMT.txt > Reads64Ksequential8Oustanding.txt

sqlio -kR -s7200 -fsequential -o8 -b512 -LS -FparamMT.txt > Reads512Ksequential8Oustanding.txt

Sequential Write tests

sqlio -kW -s7200 -fsequential -o8 -b8 -LS -FparamMT.txt > Writes8Ksequential8Outstanding.txt

sqlio -kW -s7200 -fsequential -o100 -b256 -LS -FparamST.txt > Writes256Ksequential100Outstanding.txt

sqlio -kW -s7200 -fsequential -o200 -b256 -LS -FparamST.txt > Writes256Ksequential200Outstanding.txt

SQLIO parameter reference:

  • -kW and -kR: whether we are testing writes or reads.
  • -t8 and -o8: 8 threads with up to 8 outstanding requests at once. SQLIO isn't CPU-bound at all, and you can use more threads than you have processors. The more load we throw at storage, the faster it goes, up to a point. (In the commands above the per-file thread count comes from the param file rather than -t.)
  • -s7200: the test will last 7,200 seconds (2 hours).
  • -b8, -b64, -b512: the size of our I/O requests in kilobytes. SQL Server does a lot of random work in 8 KB chunks, and we also test sequential work in 64 KB and 512 KB chunks.
  • -frandom and -fsequential: random versus sequential access. Many queries jump around randomly in the database, whereas things like backups, bulk loads, and table scans generally work sequentially.

A sample of SQLIO is shown below.

8 KB random reads using a 90 GB file with 8 outstanding I/O requests:

sqlio v1.5.SG
using system counter for latency timings, 2539062 counts per second
parameter file used: paramMT.txt
file O:\SQLIOMP\Disk1\testfile.dat with 48 threads (0-47) using mask 0x0 (0)
48 threads reading for 7200 secs from file O:\SQLIOMP\Disk1\testfile.dat
using 8KB random IOs
enabling multiple I/Os per thread with 8 outstanding
size of file O:\SQLIOMP\Disk1\testfile.dat needs to be: 96636764160 bytes
current file size: 0 bytes
need to expand by: 96636764160 bytes
expanding O:\SQLIOMP\Disk1\testfile.dat … done.
using specified size: 92160 MB for file: O:\SQLIOMP\Disk1\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 15504.09
MBs/sec: 121.12
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 24
Max_Latency(ms): 4887
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 13 21 7 4 3 3 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 32
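As a quick sanity check, the bandwidth figure follows directly from the IOPS and the block size: 15,504.09 I/Os per second x 8 KB per I/O is roughly 124,033 KB/sec, or about 121 MB/sec, which matches the reported 121.12 MBs/sec.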

sqlio v1.5.SG
using system counter for latency timings, 2539062 counts per second
parameter file used: paramMT.txt
file P:\SQLIOMP\SQLIO_01\testfile.dat with 48 threads (0-47) using mask 0x0 (0)
48 threads reading for 7200 secs from file P:\SQLIOMP\SQLIO_01\testfile.dat
using 8KB random IOs
enabling multiple I/Os per thread with 8 outstanding
size of file P:\SQLIOMP\SQLIO_01\testfile.dat needs to be: 96636764160 bytes
current file size: 0 bytes
need to expand by: 96636764160 bytes
expanding P:\SQLIOMP\SQLIO_01\testfile.dat … done.
using specified size: 92160 MB for file: P:\SQLIOMP\SQLIO_01\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 18325.52
MBs/sec: 143.16
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 20
Max_Latency(ms): 58819
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 3 3 3 3 5 6 7 19 13 6 4 2 3 2 1 1 1 1 1 1 1 1 1 1 13

sqlio v1.5.SG
using system counter for latency timings, 1948244 counts per second
parameter file used: paramMT.txt
file H:\SQLIOMP\SQLIO_01\testfile.dat with 40 threads (0-39) using mask 0x0 (0)
40 threads reading for 7200 secs from file H:\SQLIOMP\SQLIO_01\testfile.dat
using 8KB random IOs
enabling multiple I/Os per thread with 8 outstanding
size of file H:\SQLIOMP\SQLIO_01\testfile.dat needs to be: 96636764160 bytes
current file size: 0 bytes
need to expand by: 96636764160 bytes
expanding H:\SQLIOMP\SQLIO_01\testfile.dat … done.
using specified size: 92160 MB for file: H:\SQLIOMP\SQLIO_01\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 191786.53
MBs/sec: 1498.33
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 1
Max_Latency(ms): 113060
histogram:
ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 60 12 4 6 12 4 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Converting the output of SQLIO to a more readable format using SQLIOResult2Excel.ps1.

Code for SQLIOResult2Excel.ps1 (from the internet):

<-- Begin

## @The MTC Blog

# SQLIO Result Parser by Chai Swaddipong
# Feel free to pass along/improve

# Parsing was tested with the results from sqlio v1.5.SG; results displayed in Excel2013
# Start Excel application
$xl=New-Object -ComObject "Excel.Application"

#Constant
$xlChart=[Microsoft.Office.Interop.Excel.XLChartType]
$xlDirection=[Microsoft.Office.Interop.Excel.XLDirection]

# Create a workbook
$wb=$xl.Workbooks.Add()
$ws = $wb.Worksheets.Item("Sheet1")

$cells=$ws.Cells

$cells.item(1,1)="SQLIO Test Results from " + $args[0]
$cells.item(1,1).font.bold="True"
$cells.item(1,1).font.size=18

#Location of the header text
$row=3
$col=1

#insert header text

"Threadcount","Outstanding IO","Testfile Path","Datafile size","Duration","IO Pattern","Bandwidth(MB/s)","IOPs","Average latency(ms)" | foreach {
$cells.item($row,$col)=$_
$cells.item($row,$col).font.bold=$True
$col++
}

# Loop through lines of the result file

ForEach ($line in Get-Content $args[0] )
{

# Remove leading spaces (assign the result back so later matches see the trimmed line)
$line = $line -replace '^\s+', ''

# How many threads? Read or write test? How long does the test run? What is the test file?
if ( $line -match "^\d+ threads \w*" )
{
$TestDetails = $line.Split(" ")
$ThreadCount = $TestDetails[0]

if ( $TestDetails[2] -match "reading")
{
$ReadOrWrite = $TestDetails[2].replace("reading","read")
}
else
{
$ReadOrWrite = $TestDetails[2].replace("writing","write")
}

$TestDuration = $TestDetails[4]
$TestFilePath = $TestDetails[8]
}

# IO size for the test? Sequential or random IO pattern?
if ( $line -match " \w* (sequential|random)")
{
$IoToTest = $matches[0].split(" ")
$ioSize = $IoToTest[1]
$RandomOrSeq = $IoToTest[2]
}

# Size of the datafile?
if ( $line -match "\w+: \d+ \w+ " )
{
$DataFileSize = ($matches[0].Split(":"))[1]
$FieldCount++
}

# IOs per thread
if ( $line -match " \d+ outstanding" )
{
$QueueDepth = ($matches[0].Split(" "))[1]
}

# Bandwidth attained
if ( $line -match "MBs/sec" )
{
$Bandwidth = ($line.Split(":"))[1]
}

# Average latency attained
if ( $line -match "Avg_Latency" )
{
$AverageLatency = ($line.Split(":"))[1]
}

# Throughput attained
if ( $line -match "IOs/sec" )
{
$Throughput = ($line.Split(":"))[1]
}

# End of each run of SQLIO. Write out results
if ($line.startswith("histogram") )
{
write-host "Writing to Excel -->" $ThreadCount $QueueDepth $TestFilePath $DataFileSize $TestDuration $ioSize $RandomOrSeq $ReadOrWrite $Bandwidth $Throughput $AverageLatency
$row++
$col=1
$cells.item($row,$col)=$ThreadCount
$col++
$cells.item($row,$col)=$QueueDepth
$col++
$cells.item($row,$col)=$TestFilePath
$col++
$cells.item($row,$col)=$DataFileSize
$col++
$cells.item($row,$col)=$TestDuration
$col++
$cells.item($row,$col)=$ioSize+$RandomOrSeq+$ReadOrWrite
$col++
$cells.item($row,$col)=$Bandwidth
$col++
$cells.item($row,$col)=$Throughput
$col++
$cells.item($row,$col)=$AverageLatency
}

}

#Sort on Bandwidth
$objRange = $ws.range("A4:I$row")
$objRange2 = $ws.Range("G4")
[void]$objRange.Sort($objRange2)
$chart=$ws.Shapes.AddChart().Chart
$chart.chartType=$xlChart::xlBarClustered

$start=$ws.range("G3")

#get the last cell
$Y=$ws.Range($start,$start.End($xlDirection::xlDown))
$start=$ws.range("F3")

#get the last cell
$X=$ws.Range($start,$start.End($xlDirection::xlDown))

$chartdata=$ws.Range("G$($Y.item(1).Row):G$($Y.item($Y.count).Row),F$($X.item(1).Row):F$($X.item($X.count).Row)")
$chart.SetSourceData($chartdata)

$chart.ChartTitle.Text = "Bandwidth"
$ws.shapes.item("Chart 1").top=60
$ws.shapes.item("Chart 1").left=600
$ws.shapes.item("Chart 1").width=600
$ws.shapes.item("Chart 1").height=300

#Sort on IOPs
$objRange = $ws.range("A4:I$row")
$objRange2 = $ws.Range("H4")
[void]$objRange.Sort($objRange2)

$chart=$ws.Shapes.AddChart().Chart
$chart.chartType=$xlChart::xlLine

$start=$ws.range("H3")

#get the last cell
$Y=$ws.Range($start,$start.End($xlDirection::xlDown))
$start=$ws.range("F3")

#get the last cell
$X=$ws.Range($start,$start.End($xlDirection::xlDown))

$chartdata=$ws.Range("H$($Y.item(1).Row):H$($Y.item($Y.count).Row),F$($X.item(1).Row):F$($X.item($X.count).Row)")
$chart.SetSourceData($chartdata)

$chart.ChartTitle.Text = "IOPs"
$ws.shapes.item("Chart 2").top=360
$ws.shapes.item("Chart 2").left=600
$ws.shapes.item("Chart 2").width=600
$ws.shapes.item("Chart 2").height=300

#Sort on average latency
$objRange = $ws.range("A4:I$row")
$objRange2 = $ws.Range("I4")
[void]$objRange.Sort($objRange2,2)

$chart=$ws.Shapes.AddChart().Chart
$chart.chartType=$xlChart::xlLine

$start=$ws.range("I3")

#get the last cell
$Y=$ws.Range($start,$start.End($xlDirection::xlDown))
$start=$ws.range("F3")

#get the last cell
$X=$ws.Range($start,$start.End($xlDirection::xlDown))

$chartdata=$ws.Range("I$($Y.item(1).Row):I$($Y.item($Y.count).Row),F$($X.item(1).Row):F$($X.item($X.count).Row)")
$chart.SetSourceData($chartdata)

$chart.ChartTitle.Text = "Average Latency"
$ws.shapes.item("Chart 3").top=660
$ws.shapes.item("Chart 3").left=600
$ws.shapes.item("Chart 3").width=600
$ws.shapes.item("Chart 3").height=300
#Show time!!
$xl.visible="True"

End -->

Command to run:

PS C:\Users\bmani\Desktop\SQLIO\SQLIO-HUS150-64GB-Disk1\SQLIO> .\SQLIOResult2Excel.ps1 C:\Users\bmani\Desktop\SQLIO\SQLO-HUS150-64GB-Disk4\SQLIO\Reads8KRandom8Oustanding.txt

8 KB random reads using a 90 GB file with 8 outstanding I/O requests:

Threadcount | Outstanding IO | Testfile Path | Datafile size | Duration | IO Pattern | Bandwidth(MB/s) | IOPs | Average latency(ms)
48 | 8 | O:\SQLIOMP\Disk4\testfile.dat | 92160 MB | 7200 | 8KBrandomread-HUS150 | 48.64 | 6226.01 | 61
48 | 8 | P:\SQLIOMP\SQLIO_04\testfile.dat | 92160 MB | 7200 | 8KBrandomread-Nimble | 455.38 | 58288.99 | 6
40 | 8 | H:\SQLIOMP\SQLIO_04\testfile.dat | 92160 MB | 7200 | 8KBrandomread-Pure | 1473.68 | 188631.44 | 1


Hope this helps!

 

 

Posted in GENERAL PERFORMANCE

Troubleshooting a PLE (Page Life Expectancy) issue:

Posted by database-wiki on April 11, 2016

Problem Description:
===================
We have observed that the SQL Server PLE value has hit an all-time record low in one of our environments.

Analysis:
========
Currently we have 16 GB of memory in the server, of which 12 GB has been allocated to the SQL Server instance.

This is not to say that the instance is under memory pressure. If that were the case, you would see the error below in the SQL Server error log, and that is a critical issue needing immediate attention.

"A significant part of sql server process memory has been paged out. This may result in a performance degradation. Duration: 0 seconds. Working set (KB): 1086400, committed (KB): 2160928, memory utilization: 50%."

In that scenario an activity called working set trimming happens, where Windows signals SQL Server to move pages from the buffer pool to the Windows page file. This happens as a result of memory contention faced by the entire server, and working set trimming hugely degrades SQL Server performance.

As far as the PLE issue is concerned, the SQL Server instance has started to suffocate due to a shortage of contiguous memory in the buffer pool for optimal memory management.

A commonly used healthy minimum for PLE is calculated using the formula below.

PLE threshold = (DataCacheSizeInGB / 4 GB) * 300 seconds

So the value should ideally be at least (12 GB / 4 GB) * 300 = 900 seconds.

The query below gives you the PLE value for the instance.

SELECT [object_name],
[counter_name],
[cntr_value]
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Manager%'
AND [counter_name] = 'Page life expectancy'

After monitoring the instance for some time, this value vacillates between as low as 5 and as high as 200.

This is evidence that SQL Server frequently loads and unloads pages from the buffer pool: with all the pages in use, a query runs that causes a huge amount of different data to be read from disk, displacing much of what is already in memory and causing a precipitous drop in PLE.

If you look at the DB1 pages in the buffer pool, they amount to close to 4 GB. At any given point, if SQL Server cannot find contiguous space to accommodate 4 GB worth of pages (every ETL cycle), it will flush the existing pages of the DB1 database file and load the other required pages.

Below is the query that will give you the data page size in MB for all the databases.

DECLARE @total_buffer INT;
SELECT @total_buffer = cntr_value
FROM sys.dm_os_performance_counters
WHERE RTRIM([object_name]) LIKE '%Buffer Manager'
AND counter_name = 'Total Pages';

;WITH BufCount AS
(
SELECT
database_id, db_buffer_pages = COUNT_BIG(*)
FROM sys.dm_os_buffer_descriptors
WHERE database_id BETWEEN 5 AND 32766
GROUP BY database_id
)
SELECT
[Database_Name] = CASE [database_id] WHEN 32767
THEN 'MSSQL System Resource DB'
ELSE DB_NAME([database_id]) END,
[Database_ID],
db_buffer_pages as [Buffer Count (8KB Pages)],
[Buffer Size (MB)] = db_buffer_pages / 128,
[Buffer Size (%)] = CONVERT(DECIMAL(6,3),
db_buffer_pages * 100.0 / @total_buffer)
FROM BufCount
ORDER BY [Buffer Size (MB)] DESC;

=> The query below gives you which objects hold the largest chunk of pages in the buffer pool.

SELECT count(1)/128 AS cached_MB
, name
, index_id
FROM sys.dm_os_buffer_descriptors AS bd with (NOLOCK)
INNER JOIN
(
SELECT name = OBJECT_SCHEMA_NAME(object_id) + '.' + object_name(object_id)
--name = 'dbo.' + cast(object_id as varchar(100))
, index_id
, allocation_unit_id
FROM sys.allocation_units AS au with (NOLOCK)
INNER JOIN sys.partitions AS p with (NOLOCK)
ON au.container_id = p.hobt_id
AND (au.type = 1 OR au.type = 3)
UNION ALL
SELECT name = OBJECT_SCHEMA_NAME(object_id) + '.' + object_name(object_id)
--name = 'dbo.' + cast(object_id as varchar(100))
, index_id
, allocation_unit_id
FROM sys.allocation_units AS au with (NOLOCK)
INNER JOIN sys.partitions AS p with (NOLOCK)
ON au.container_id = p.partition_id
AND au.type = 2
) AS obj
ON bd.allocation_unit_id = obj.allocation_unit_id
WHERE database_id = db_id()
GROUP BY name, index_id
HAVING Count(*) > 128
ORDER BY 1 DESC;

=> Pages from the clustered index being loaded into the buffer pool means we have a query that does an index scan on the object.

=> The query below is used to get the execution plans that touch the index on the table with a large number of pages in the buffer pool.

DECLARE @IndexName SYSNAME = '[PK_DB1]';
DECLARE @DatabaseName SYSNAME;

SELECT @DatabaseName = '[' + DB_NAME() + ']';

WITH XMLNAMESPACES
(DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
n.value('(@StatementText)[1]', 'VARCHAR(4000)') AS sql_text,
n.query('.'),
cp.plan_handle,
i.value('(@PhysicalOp)[1]', 'VARCHAR(128)') AS PhysicalOp,
i.value('(./IndexScan/@Lookup)[1]', 'VARCHAR(128)') AS IsLookup,
i.value('(./IndexScan/Object/@Database)[1]', 'VARCHAR(128)') AS DatabaseName,
i.value('(./IndexScan/Object/@Schema)[1]', 'VARCHAR(128)') AS SchemaName,
i.value('(./IndexScan/Object/@Table)[1]', 'VARCHAR(128)') AS TableName,
i.value('(./IndexScan/Object/@Index)[1]', 'VARCHAR(128)') as IndexName,
i.query('.'),
STUFF((SELECT DISTINCT ', ' + cg.value('(@Column)[1]', 'VARCHAR(128)')
FROM i.nodes('./OutputList/ColumnReference') AS t(cg)
FOR XML PATH('')),1,2,'') AS output_columns,
STUFF((SELECT DISTINCT ', ' + cg.value('(@Column)[1]', 'VARCHAR(128)')
FROM i.nodes('./IndexScan/SeekPredicates/SeekPredicateNew//ColumnReference') AS t(cg)
FOR XML PATH('')),1,2,'') AS seek_columns,
RIGHT(i.value('(./IndexScan/Predicate/ScalarOperator/@ScalarString)[1]', 'VARCHAR(4000)'), len(i.value('(./IndexScan/Predicate/ScalarOperator/@ScalarString)[1]', 'VARCHAR(4000)')) - charindex('.', i.value('(./IndexScan/Predicate/ScalarOperator/@ScalarString)[1]', 'VARCHAR(4000)'))) as Predicate,
cp.usecounts,
query_plan
FROM ( SELECT plan_handle, query_plan
FROM ( SELECT DISTINCT plan_handle
FROM sys.dm_exec_query_stats WITH(NOLOCK)) AS qs
OUTER APPLY sys.dm_exec_query_plan(qs.plan_handle) tp
) as tab (plan_handle, query_plan)
INNER JOIN sys.dm_exec_cached_plans AS cp
ON tab.plan_handle = cp.plan_handle
CROSS APPLY query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/*') AS q(n)
CROSS APPLY n.nodes('.//RelOp[IndexScan/Object[@Index=sql:variable("@IndexName") and @Database=sql:variable("@DatabaseName")]]' ) as s(i)
--WHERE i.value('(./IndexScan/@Lookup)[1]', 'VARCHAR(128)') = 1
OPTION(RECOMPILE, MAXDOP 1);

=> Query to find the buffer pool usage by objects:

IF OBJECT_ID('TempDB..#BufferSummary') IS NOT NULL BEGIN
DROP TABLE #BufferSummary
END

IF OBJECT_ID('TempDB..#BufferPool') IS NOT NULL BEGIN
DROP TABLE #BufferPool
END

CREATE TABLE #BufferPool
(
Cached_MB Int
, Database_Name SysName
, Schema_Name SysName NULL
, Object_Name SysName NULL
, Index_ID Int NULL
, Index_Name SysName NULL
, Used_MB Int NULL
, Used_InRow_MB Int NULL
, Row_Count BigInt NULL
)

SELECT Pages = COUNT(1)
, allocation_unit_id
, database_id
INTO #BufferSummary
FROM sys.dm_os_buffer_descriptors
GROUP BY allocation_unit_id, database_id

DECLARE @DateAdded SmallDateTime
SELECT @DateAdded = GETDATE()

DECLARE @SQL NVarChar(4000)
SELECT @SQL = ' USE [?]
INSERT INTO #BufferPool (
Cached_MB
, Database_Name
, Schema_Name
, Object_Name
, Index_ID
, Index_Name
, Used_MB
, Used_InRow_MB
, Row_Count
)
SELECT sum(bd.Pages)/128
, DB_Name(bd.database_id)
, Schema_Name(o.schema_id)
, o.name
, p.index_id
, ix.Name
, i.Used_MB
, i.Used_InRow_MB
, i.Row_Count
FROM #BufferSummary AS bd
LEFT JOIN sys.allocation_units au ON bd.allocation_unit_id = au.allocation_unit_id
LEFT JOIN sys.partitions p ON (au.container_id = p.hobt_id AND au.type in (1,3)) OR (au.container_id = p.partition_id and au.type = 2)
LEFT JOIN (
SELECT PS.object_id
, PS.index_id
, Used_MB = SUM(PS.used_page_count) / 128
, Used_InRow_MB = SUM(PS.in_row_used_page_count) / 128
, Used_LOB_MB = SUM(PS.lob_used_page_count) / 128
, Reserved_MB = SUM(PS.reserved_page_count) / 128
, Row_Count = SUM(row_count)
FROM sys.dm_db_partition_stats PS
GROUP BY PS.object_id
, PS.index_id
) i ON p.object_id = i.object_id AND p.index_id = i.index_id
LEFT JOIN sys.indexes ix ON i.object_id = ix.object_id AND i.index_id = ix.index_id
LEFT JOIN sys.objects o ON p.object_id = o.object_id
WHERE database_id = db_id()
GROUP BY bd.database_id
, o.schema_id
, o.name
, p.index_id
, ix.Name
, i.Used_MB
, i.Used_InRow_MB
, i.Row_Count
HAVING SUM(bd.pages) > 128
ORDER BY 1 DESC;'

EXEC sp_MSforeachdb @SQL

SELECT Cached_MB
, Pct_of_Cache = CAST(Cached_MB * 100.0 / SUM(Cached_MB) OVER () as Dec(20,3))
, Pct_Index_in_Cache = CAST(Cached_MB * 100.0 / CASE Used_MB WHEN 0 THEN 0.001 ELSE Used_MB END as DEC(20,3))
, Database_Name
, Schema_Name
, Object_Name
, Index_ID
, Index_Name
, Used_MB
, Used_InRow_MB
, Row_Count
FROM #BufferPool
ORDER BY Cached_MB DESC

=> Once you get the execution plan, check whether the plan is optimal, i.e. whether there are any missing indexes leading to an index scan; a sketch of the missing-index DMV query is shown below.
=> In our case, creating the missing indexes increased the PLE value to an optimum level.
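A minimal sketch of checking the missing-index DMVs (not from the original post; the column choice and weighting are illustrative):

SELECT TOP (20)
mid.statement AS table_name,
mid.equality_columns,
mid.inequality_columns,
mid.included_columns,
migs.user_seeks,
migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
INNER JOIN sys.dm_db_missing_index_groups AS mig ON mig.index_handle = mid.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats AS migs ON migs.group_handle = mig.index_group_handle
WHERE mid.database_id = DB_ID()
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;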

Posted in GENERAL PERFORMANCE

The distribution agent may experience high latency when you configure transactional replication with articles that are configured not to replicate changes as parameterized statements.

Posted by database-wiki on April 11, 2016

Symptoms

In Microsoft SQL Server 2005 or Microsoft SQL Server 2008, you set up transactional replication.

  • The transactional replication contains one or more articles.
  • For one or more of the articles you specify a non-default value for the @status parameter (the default is either 16 or 24) of the sp_addarticle stored procedure. Or, you set the @ins_cmd, @del_cmd and @upd_cmd parameters of sp_addarticle to SQL.

In this scenario, you may find that the Distribution Agent exhibits high latency when replicating data to the Subscriber. Additionally, you may notice high values for the NETWORKIO wait type on the SPID corresponding to the Distribution Agent process on the Distributor.

Note: You can confirm the above symptom by generating the script for your publication and looking at the value of the @status parameter of sp_addarticle for each of the articles in the publication. If this value is less than 16, then you are not using parameterized statements for propagating changes to the Subscriber for the specified article, and hence this article applies to your scenario.
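As a rough alternative to scripting the publication, the status bits can be inspected directly in the publication database. A sketch, assuming that bit 16 of sysarticles.status corresponds to "use parameterized statements" as documented for sp_addarticle:

-- Run in the publication database: articles whose commands are NOT replicated as parameterized statements
SELECT name, status
FROM dbo.sysarticles
WHERE status & 16 = 0;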

Cause

Replicating commands without using parameterized statements causes each command in a transaction to be sent individually via RPC. Thus, when you have a transaction with multiple commands, the Distribution Agent requires multiple round trips to replicate the transaction to the Subscriber, thereby increasing latency. On the other hand, when you use parameterized statements (the default setting), the Distribution Agent groups multiple commands in a single transaction into one or more batches and replicates them to the Subscriber, thereby reducing network round trips and latency.

Note: In a SQL Profiler trace, a batch shows up as one pair of RPC:Starting and RPC:Completed events.

Resolution

To resolve the problem, change the affected article's status property to "parameters" using the sp_changearticle stored procedure. This stored procedure should be executed at the Publisher on the publication database.

The syntax is as follows:

Exec sp_changearticle @publication='<PublicationName>', @article='<ArticleName>', @property='status', @value='parameters'

Note: Running the above procedure does not cause or require re-initialization of the subscription.

Posted in REPLICATION

Distribution agent is skipping transactions at the subscriber.

Posted by database-wiki on April 11, 2016

Symptoms

ErrorCode = '20598'
ErrorText = 'The row was not found at the Subscriber when applying the replicated command.'

Cause

Incorrect steps were followed when setting up the Subscriber using @sync_type = N'initialize with backup'. As per BOL, this cannot be done using the GUI and can only be done using T-SQL scripts.

Using T-SQL scripts ensures that new commands written to the log after the backup is taken are queued in the distribution database and replicated to the Subscriber.

Resolution

To resolve the issue, create a new publication and set 'allow_initialize_from_backup' to true in the Subscription Options of the publication properties.
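On an existing publication this property can also be set with sp_changepublication; a minimal sketch with a hypothetical publication name:

EXEC sp_changepublication
@publication = N'MyPublication',
@property = N'allow_initialize_from_backup',
@value = N'true';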

If a backup is used, you should use T-SQL to create the subscription. When executing sp_addsubscription you have to specify the backup file used for the restore using @backupdevicename. The Distribution Agent then executes a RESTORE HEADERONLY command against the backup to get the last LSN from the file, queries msrepl_transactions from that LSN forward, and replicates all the changes to the Subscriber.
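A minimal sketch of creating such a subscription; the server, database, publication and backup path names are hypothetical:

EXEC sp_addsubscription
@publication = N'MyPublication',
@subscriber = N'SubscriberServer',
@destination_db = N'SubscriberDB',
@subscription_type = N'push',
@sync_type = N'initialize with backup',
@backupdevicetype = N'disk',
@backupdevicename = N'C:\MSSQL\Backup\PublisherDB_full.bak';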

Posted in REPLICATION

How to drop Replication Identity Check Constraint.

Posted by database-wiki on April 11, 2016

Problem Description:

Customer executed sp_changemergearticle to disable automatic identity range management.
One Subscriber still had the identity range check constraint.

When trying to manually remove the check constraint, you get the following error:

--TESTING
ALTER TABLE [dbo].[Shippers] DROP CONSTRAINT
[repl_identity_range_259EDBE5_AC01_41C1_ADDB_997EA2098FB2]

Msg 25010, Level 16, State 1, Procedure sp_MSmerge_altertable, Line 284
The constraint is used by merge replication for identity management and cannot be
dropped directly. Execute sp_changemergearticle @publication, @article,
"identityrangemanagementoption", "none" to disable merge identity management, which
will also drop the constraint.
Msg 21530, Level 16, State 1, Procedure sp_MSmerge_ddldispatcher, Line 182
The schema change failed during execution of an internal replication procedure. For
corrective action, see the other error messages that accompany this error
message.
Msg 3609, Level 16, State 2, Line 1
The transaction ended in the trigger. The batch has been aborted.

Resolution:

Untested and Unsupported Resolution

DISABLE TRIGGER [MSmerge_tr_altertable] ON DATABASE
GO

ALTER TABLE [dbo].[Shippers] DROP CONSTRAINT
[repl_identity_range_259EDBE5_AC01_41C1_ADDB_997EA2098FB2]
GO

ENABLE TRIGGER [MSmerge_tr_altertable] ON DATABASE
GO

Posted in REPLICATION

Quick and easy Script to collect Merge Replication Agent History tables.

Posted by database-wiki on April 11, 2016

Problem Description:
Quick and easy way to collect Merge Replication Agent History tables for SQL 2005.

Resolution:
--Create database to hold a copy of the Agent history tables
Create Database MergeAgentHistory
Go

Select * Into MergeAgentHistory.dbo.MSMerge_history
from distribution.dbo.MSMerge_history
Go

Select * Into MergeAgentHistory.dbo.MSmerge_sessions
from distribution.dbo.MSmerge_sessions
Go

Select * Into MergeAgentHistory.dbo.MSmerge_articlehistory
from distribution.dbo.MSmerge_articlehistory
Go

Select * Into MergeAgentHistory.dbo.MSmerge_agents
from distribution.dbo.MSmerge_agents
Go

Select * Into MergeAgentHistory.dbo.MSrepl_errors
from distribution.dbo.MSrepl_errors
Go

--Backup Database
Backup Database MergeAgentHistory
To disk='c:\temp\MergeAgentHistory.bak'
Go

Posted in REPLICATION

Steps to aggressively purge replicated transactions from the Distribution database.

Posted by database-wiki on April 11, 2016

Symptoms

The distribution database cleanup agent is not able to purge already-replicated rows from the distribution database faster than rows are being added. As a result, the distribution database is growing in size.

Cause

To reduce locking contention, the cleanup agent deletes in batches of 2000 and 5000 rows. When a large batch of transactions has been replicated, the distribution cleanup agent may not be able to delete fast enough to keep up with the new rows being added. You may notice the number of rows in the replication tracking tables growing to over 10+ million.

select count(*) from distribution..msrepl_commands

select count(*) from distribution..msrepl_transactions
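For reference, the retention-based purge is normally performed by the "Distribution clean up: distribution" agent job, which runs a call along these lines (a sketch; the retention values shown are the defaults and may differ on your system):

EXEC distribution.dbo.sp_MSdistribution_cleanup @min_distretention = 0, @max_distretention = 72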

Resolution

Distribution Database Cleanup.

Use these steps to aggressively clean out already-replicated data from the distribution database. These steps save out all rows since the last replicated transaction from msrepl_transactions and msrepl_commands to temp tables, truncate msrepl_transactions and msrepl_commands, and then re-insert the saved rows. We determine which rows to save by checking the transaction_timestamp value in the MSreplication_subscriptions table at each Subscriber for each subscription database.

Reminder: this should work, but be prepared to have to re-initialize the Subscriber.

***Steps for Manual Truncation of Distribution Database***

1) Stop all logreader and distribution agents utilizing this distribution server. Use the queries below to identify all Publisher and Subscriber servers.

--returns list of publishers using a distributor

sp_helpdistpublisher

--returns list of published databases

SELECT *

FROM [distribution].[dbo].[MSpublisher_databases]

--Run in each published database

--Returns publication/article/subscriber information

sp_helpsubscription

2) At each Subscriber and in each subscription database, query the MSreplication_subscriptions table for the bookmark xact id; you should find only one row per publication in this table at each Subscriber. This value indicates which rows have already been delivered to that Subscriber.

select transaction_timestamp from msreplication_subscriptions

Here is the value returned for me: 0x00000045000002560003000000000000

3) Determine the SMALLEST value across ALL Subscribers in ALL subscription databases. Use that value when querying the msrepl_transactions and msrepl_commands tables in the distribution database for rows to preserve. Now, let's query msrepl_transactions and msrepl_commands using our transaction_timestamp and see whether there are any other commands that have not yet been propagated to either Subscriber.

--Be sure to use your smallest transaction_timestamp value here:

select count(*) from distribution..msrepl_commands where xact_seqno>=0x00000045000002560003000000000000

select count(*) from distribution..msrepl_transactions where xact_seqno>=0x00000045000002560003000000000000

4) How many rows do you get back? If around 1 million rows, proceed to save those rows. If more, determine which Distribution Agent is behind (lowest LSN) and see what steps can be used to move that Distribution Agent forward (see the sketch below).
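One way to see how far each Distribution Agent has gotten is to look at the agent history on the distributor; a sketch (table and column names as they exist in the distribution database, but verify on your version):

-- Latest delivered LSN per Distribution Agent
SELECT h.agent_id, a.name, MAX(h.xact_seqno) AS last_delivered_xact_seqno
FROM distribution.dbo.MSdistribution_history AS h
INNER JOIN distribution.dbo.MSdistribution_agents AS a ON a.id = h.agent_id
GROUP BY h.agent_id, a.name
ORDER BY last_delivered_xact_seqno ASC;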

5) Save the rows returned by the above queries to temp tables before we truncate the replication tables:

Begin Transaction

select * into distribution..msrepl_commands_temp from distribution..msrepl_commands

where xact_seqno>=0x00000045000002560003000000000000

select * into distribution..msrepl_transactions_temp from distribution..msrepl_transactions

where xact_seqno>=0x00000045000002560003000000000000

--Did you get the same row counts as in step 3? If yes, then commit the transaction.

Commit Transaction

6) To verify, you can query the temp tables and make sure that the rows were saved out. Then perform the following to truncate the two tables in the distribution database:

truncate table distribution..msrepl_commands

truncate table distribution..msrepl_transactions

7) Once these are emptied, insert our rows back in from our temp tables:

insert into distribution..msrepl_commands

select * from distribution..msrepl_commands_temp

insert into distribution..msrepl_transactions

select * from distribution..msrepl_transactions_temp

Now, verify the rows are successfully re-inserted into the two tables. Then start the logreader and distribution agents back up and verify that they are able to successfully synchronize to both subscribers.

Posted in REPLICATION

How to manually clean up SQL Server replication.

Posted by database-wiki on April 11, 2016

Problem Description:

The best option is to use sp_removedbreplication. However, if the removal SP fails, additional clean-up may be needed.

The following script can be used to manually clean up replication. I would recommend first removing the objects which gave errors in the remove-replication SP, then rerunning sp_removedbreplication. For example, if the conflict table cleanup fails, delete the conflict tables using the script, then rerun. The cleanup SP is still the most thorough.

Resolution

-- Modify the Select and Drop statement as needed.
-- Copy the output to another Query Analyzer window.
-- Verify no user defined objects are in the list before executing the DROP.

SET NOCOUNT ON
Declare @objname varchar(100), @message varchar(128)
Declare ImageCursor Cursor FOR
Select name from sysobjects where name like '%conflict%' and type='u'

OPEN ImageCursor
FETCH NEXT FROM ImageCursor into @objname
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @message = 'Drop table ' + @objname
PRINT @message
-- execute (@message) -- recommend not executing directly; copy the output to a new window to run
FETCH NEXT FROM imageCursor into @objname
END

CLOSE imagecursor
DEALLOCATE imagecursor

—————————–

Select name from sysobjects where name like '%conflict%' and type='u'
SELECT @message = 'Drop table ' + @objname

Select name from sysobjects where type = 'tr' and status < 0 and category = 2
SELECT @message = 'Drop Trigger ' + @name

Select name from sysobjects where name like 'sp_%' and type ='p' and category = 2

SELECT @message = 'Drop Procedure ' + @name

Select name from sysobjects where name like 'tsvw%' and type ='v' and category = 2

SELECT @message = 'Drop View ' + @name

Select name from sysobjects where name like 'ctsv%' and type ='v' and category = 2

SELECT @message = 'Drop View ' + @name

Select name from sysobjects where name like 'MSMerge%' and type='u'
SELECT @message = 'Drop table ' + @objname

-----------IF there are objects created with a different login (i.e. the owner is not dbo), the drop statement can be modified to include the user name prefix----------

Select name from sysobjects where type = 'tr' and status < 0 and category = 2
SELECT @message = 'Drop Trigger <username>.' + @name

Reference:

Problem Description:
You are trying to remove merge replication from your SQL Server 2000 SP3 instance and sp_removedbreplication fails with a "does not exist" error when trying to drop one of the conflict tables.

Criteria for Resolution:
Remove merge replication from the database ServiceDesk5 on your development system
without the above error.

Steps Taken:
* Manually removed replication related objects:
Select 'drop table ' + name from sysobjects where name like '%conflict%' and type='u'
Select 'drop trigger ' + name from sysobjects where type = 'tr' and status < 0 and category = 2
Select 'drop proc ' + name from sysobjects where name like 'sp_%' and type ='p' and category = 2
Select 'drop view ' + name from sysobjects where name like 'tsvw%' and type ='v' and category = 2
Select 'drop view ' + name from sysobjects where name like 'ctsv%' and type ='v' and category = 2
Select 'drop table ' + name from sysobjects where name like 'MSMerge%' and type='u'

* Followed the steps in the article below so that your user objects were no longer marked as being used in replication:
326352 PRB: You Receive Error 3724 When You Drop a Non-Replicated Object
* Verified with "select * from syscolumns WHERE colstat & 4096 <> 0" that none of the columns were marked for replication.
811899 PRB: Error 4928 Occurs When You Try to Rename a Non-Replicated Column

Posted in REPLICATION

How to reset Log Reader Agent if LSN no longer matches.

Posted by database-wiki on April 11, 2016

Problem Description:

Publisher: {call sp_repldone ( 0x00000000000000000000, 0x00000000000000000000, 0, 0)}
Status: 0, code: 0, text: 'The process could not execute 'sp_repldone/sp_replcounters' on 'BORGSQL1'.'.
The process could not execute 'sp_repldone/sp_replcounters' on 'BORGSQL1'.
Repl Agent Status: 6
Status: 0, code: 18768, text: 'The specified LSN (00000000:00000000:0000) for repldone log scan occurs before the current start of replication in the log (000828b2:00003657:0003).'.
Status: 0, code: 22017, text: 'The process could not set the last distributed transaction.'.
Disconnecting from Publisher 'BORGSQL1'

Resolution:

Executed "dbcc opentran" on the published database. There were open transactions.

Transaction information for database 'Gemini'.
Replicated Transaction Information:
Oldest distributed LSN : (534706:13911:3)
Oldest non-distributed LSN : (534706:13913:1)
DBCC execution completed. If DBCC printed error messages, contact your system
administrator.

—————————————
We did not want to execute sp_repldone null, null, 0, 0, 1 since this would mark the pending transactions as replicated; they would not get moved to the Subscriber.

Instead we updated the distribution database with the proper values from the transaction log.

For EXAMPLE:

To get the log reader running again, you can update msrepl_transactions with the
values obtained from sp_repltrans by using the following procedure:

USE published_db
GO
sp_repltrans

/*The output gives the xact_id and xact_seqno for the next replication transaction.
If no rows are returned, create some dummy INSERT or UPDATE into any published
articles, and then run sp_repltrans again. If multiple rows are returned, get the
xact_id and xact_seqno from the first row. */

USE distribution
GO
SELECT id FROM MSpublisher_databases WHERE publisher_db = 'publisher_db'

/*Use the output from sp_repltrans and the id for publisher_db in
MSpublisher_databases table to update the msrepl_transactions table.
For example, the output from sp_repltrans is:

xactid xact_seqno
———————- ———————-
0x000000110003028E0001 0x000000110003028E0004

The id from MSpublisher_databases in this example is 2. Then run the following
queries to update the MSrepl_transactions table:
*/

EXEC sp_configure 'allow updates',1
GO
RECONFIGURE WITH OVERRIDE
GO

/*
Before this step, make sure all the Distribution Agents for the publisher database run successfully and have applied all pending commands to the Subscribers.
We use SELECT INTO to create a backup copy of the distribution..MSrepl_transactions table (in the script below it is created in tempdb); the target database must allow SELECT INTO.
*/

USE distribution
GO
SELECT * INTO tempdb..Msrepl_transactionsBackup
FROM distribution..Msrepl_transactions
GO
DELETE FROM MSrepl_transactions WHERE publisher_database_id = 2
GO
INSERT INTO MSrepl_transactions
VALUES (2,0x000000110003028E0001, 0x000000110003028E0004, 0)
GO
EXEC sp_configure 'allow updates',0
GO
RECONFIGURE WITH OVERRIDE
GO

To verify open transaction output execute:

dbcc opentran

Output:
——————–
Replicated Transaction Information:
Oldest distributed LSN : (25:476:7)
Oldest non-distributed LSN : (26:94:1)

Then execute :

sp_repltrans

Output:
———————

xdesid xact_seqno
———————- ———————-
0x0000001A0000005E0001 0x0000001A0000005E0007

The xdesid matches the oldest non-distributed transaction, in other words the next transaction to be picked up by the log reader:

0x0000001A 0000005E 0001 (hex)
26 94 1 (decimal, i.e. LSN 26:94:1)

Posted in REPLICATION

 