-- This demo can only be run on SQL Server 2012 or higher because of the LAG/LEAD function calls
USE <database_name>;
GO

CREATE TABLE #LagLeadSource
(
    ID INT IDENTITY(1, 1) NOT NULL,
    Value INT NOT NULL
);

INSERT INTO #LagLeadSource ( Value )
VALUES ( 123 ), ( 234 ), ( 345 ), ( 456 ), ( 567 ), ( 678 ), ( 789 ), ( 890 ), ( 901 );

SELECT ID,
       LAG(Value, 4, 0)  OVER ( ORDER BY ID ASC ) AS Next_x_4_LowestValue,
       LAG(Value, 3, 0)  OVER ( ORDER BY ID ASC ) AS Next_x_3_LowestValue,
       LAG(Value, 2, 0)  OVER ( ORDER BY ID ASC ) AS Next_x_2_LowestValue,
       LAG(Value, 1, 0)  OVER ( ORDER BY ID ASC ) AS NextLowestValue,
       Value AS 'ACTUAL_VALUE',
       LEAD(Value, 1, 0) OVER ( ORDER BY ID ASC ) AS NextHighestValue,
       LEAD(Value, 2, 0) OVER ( ORDER BY ID ASC ) AS Next_x_2_HighestValue,
       LEAD(Value, 3, 0) OVER ( ORDER BY ID ASC ) AS Next_x_3_HighestValue,
       LEAD(Value, 4, 0) OVER ( ORDER BY ID ASC ) AS Next_x_4_HighestValue
FROM #LagLeadSource;
-- Using a WHERE clause here will "break" this code: the row restriction is applied before the
-- LAG/LEAD functions are ever evaluated, rendering them useless.

IF OBJECT_ID('tempdb..#LagLeadSource') IS NOT NULL
    DROP TABLE #LagLeadSource;
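If you do need a row restriction, one workable pattern is to let LAG/LEAD run over the full set first and filter afterward in a derived table or CTE. This is a minimal sketch against the same #LagLeadSource table; the CTE name and the WHERE predicate are placeholder assumptions, not part of the original demo.

-- Sketch: apply the row restriction AFTER the window functions have been evaluated.
-- The CTE name (LagLeadCalc) and the ID > 3 predicate are illustrative only.
WITH LagLeadCalc AS
(
    SELECT ID,
           LAG(Value, 1, 0)  OVER ( ORDER BY ID ASC ) AS NextLowestValue,
           Value AS ACTUAL_VALUE,
           LEAD(Value, 1, 0) OVER ( ORDER BY ID ASC ) AS NextHighestValue
    FROM #LagLeadSource
)
SELECT ID, NextLowestValue, ACTUAL_VALUE, NextHighestValue
FROM LagLeadCalc
WHERE ID > 3;   -- filtering here does not change what LAG/LEAD already saw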
-- Get all MS_Description metadata values for ONE DATABASE
USE <insert_database_name>;
GO

SELECT u.name + '.' + t.name AS 'Schema/Table Names',
       -- td.value AS 'Table Desc',
       c.name AS 'Column Name',
       cd.value AS 'Column Desc'
FROM sysobjects AS t
     INNER JOIN sysusers AS u ON u.uid = t.uid
     LEFT OUTER JOIN sys.extended_properties AS td
         ON td.major_id = t.id
        AND td.minor_id = 0
        AND td.name = 'MS_Description'
     INNER JOIN syscolumns AS c ON c.id = t.id
     LEFT OUTER JOIN sys.extended_properties AS cd
         ON cd.major_id = c.id
        AND cd.minor_id = c.colid
        AND cd.name = 'MS_Description'
WHERE t.type = 'u'
  AND cd.value IS NOT NULL
ORDER BY t.name, c.colorder;

-- Get the column definitions (name, type, precision, scale, nullability, collation) for ONE TABLE
SELECT c.name AS Field,
       t.name AS Type,
       c.precision,
       c.scale,
       c.is_nullable,
       c.collation_name
FROM sys.columns AS c
     INNER JOIN sys.types AS t ON t.user_type_id = c.user_type_id   -- joining on user_type_id avoids duplicate rows for alias types
WHERE c.object_id = OBJECT_ID('<insert_table_name>')
ORDER BY c.column_id;

-- Add MS_Description to ONE TABLE
EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Street address information for customers, employees, and vendors.',
    @level0type = N'SCHEMA', @level0name = 'Person',
    @level1type = N'TABLE',  @level1name = 'Address';
GO

-- Add MS_Description to ONE COLUMN
EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = 'Postal code is a required column.',
    @level0type = N'SCHEMA', @level0name = 'Person',
    @level1type = N'TABLE',  @level1name = 'Address',
    @level2type = N'COLUMN', @level2name = 'PostalCode';
GO

-- Update MS_Description on ONE COLUMN
EXEC sys.sp_updateextendedproperty
    @name = N'MS_Description',
    @value = 'Employee ID must be unique.',
    @level0type = N'SCHEMA', @level0name = 'dbo',
    @level1type = N'TABLE',  @level1name = 'T1',
    @level2type = N'COLUMN', @level2name = 'id';
GO

-- Add MS_Description to a SCALAR FUNCTION -- function name is just an example
EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Performs LTRIM(RTRIM()) on input string. Optionally loops to convert all double-spaces to single-spaces.',
    @level0type = N'SCHEMA',   @level0name = 'dbo',
    @level1type = N'FUNCTION', @level1name = 'fn_Trim';
GO

-- Add MS_Description to a parameter of a scalar function -- function name is just an example
EXEC sys.sp_addextendedproperty
    @name = N'MS_Description',
    @value = N'Should all extra (double) spaces be removed?',
    @level0type = N'SCHEMA',    @level0name = 'dbo',
    @level1type = N'FUNCTION',  @level1name = 'fn_Trim',
    @level2type = N'PARAMETER', @level2name = '@StripExtraSpaces';
GO
USE <database_name>;
GO

SELECT u.name + '.' + t.name AS 'Schema.Table Name',
       CAST(td.value AS VARCHAR(5000)) AS 'Table Description',
       c.name AS 'Column Name',
       CAST(cd.value AS VARCHAR(5000)) AS 'Column Description'
FROM sysobjects AS t
     INNER JOIN sysusers AS u ON u.uid = t.uid
     LEFT OUTER JOIN sys.extended_properties AS td
         ON td.major_id = t.id
        AND td.minor_id = 0
        AND td.name = 'MS_Description'
     INNER JOIN syscolumns AS c ON c.id = t.id
     LEFT OUTER JOIN sys.extended_properties AS cd
         ON cd.major_id = c.id
        AND cd.minor_id = c.colid
        AND cd.name = 'MS_Description'
WHERE t.type = 'u'
  AND cd.value IS NOT NULL
ORDER BY t.name, c.colorder;
GO
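The queries above lean on the legacy compatibility views (sysobjects, sysusers, syscolumns). A roughly equivalent sketch using the current catalog views is shown below; treat it as a starting point and adjust the filtering to your environment.

-- Sketch: the same data-dictionary output using modern catalog views instead of sysobjects/syscolumns.
SELECT s.name + '.' + t.name AS 'Schema.Table Name',
       CAST(td.value AS VARCHAR(5000)) AS 'Table Description',
       c.name AS 'Column Name',
       CAST(cd.value AS VARCHAR(5000)) AS 'Column Description'
FROM sys.tables AS t
     INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id
     INNER JOIN sys.columns AS c ON c.object_id = t.object_id
     LEFT OUTER JOIN sys.extended_properties AS td
         ON td.major_id = t.object_id
        AND td.minor_id = 0
        AND td.name = 'MS_Description'
     LEFT OUTER JOIN sys.extended_properties AS cd
         ON cd.major_id = c.object_id
        AND cd.minor_id = c.column_id
        AND cd.name = 'MS_Description'
WHERE cd.value IS NOT NULL
ORDER BY t.name, c.column_id;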
Use the following small script to find all of the "hidden" triggers in a SQL Server database. Most experienced SQL developers agree that triggers are a bad practice, but few know how to find them before they have wasted a lot of development and testing time wondering why things aren't working the way they expect (because triggers are quietly doing work behind the scenes).
-- Finds all "hidden" triggers in a database. Using triggers is not a best practice.
USE <database_name>;
GO

SELECT name AS [TriggerName]
FROM sys.objects
WHERE OBJECTPROPERTY(object_id, 'IsTrigger') <> 0;
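If you also want to see which table each trigger belongs to and whether it is currently disabled, sys.triggers carries that information. The following is a minimal sketch; the column aliases are my own.

-- Sketch: list each trigger with its parent table and its enabled/disabled state.
SELECT tr.name AS [TriggerName],
       OBJECT_NAME(tr.parent_id) AS [ParentTable],
       tr.is_disabled
FROM sys.triggers AS tr
WHERE tr.parent_class = 1   -- object (table/view) triggers; excludes database-level DDL triggers
ORDER BY [ParentTable], [TriggerName];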
Here’s some information on SQL Prompt from RedGate, a kickin’ plug-in for SQL Server Management Studio.
This tool is not free – but it’s certainly a productivity enhancer in the long run for writing clean, easy-to-read SQL, with reusable and shared script components (snippets) and a way to share team best practices for SQL. Cost is $369 per seat license (last I checked), including perpetual upgrades (or is there a one-year time limit?); site licensing is certainly more cost-effective per seat. Its little brother SQL Search is free, and it’s excellent as well. I have used RedGate products, including this one, for five years and highly recommend SQL Prompt in particular.
- Reusable common script components or shared snippets that an entire team can edit and share from a common network-share location.
- On one project we had 50+ script components that several coders shared, which improved code quality and shortened time-to-completion.
- Having pre-written blocks or snippets of code really speeds things up
- Metadata discovery for database objects, “Intellisense” style, as you type, including MS_Description (extended-property) values and object type characteristics
- More robust Table/View/Column auto-completion, Visual Studio style
- Better script parsing for errors (red-squigglies under errors)
- Supports script refactoring with context-menu (right-click) support
- Supports script formatting with default or custom rule-sets, which helps to improve readability of the average SQL code-base
- Search the database for all scripts using “<search term>”, right within SSMS
- When you install this product, it installs as both an SSMS and a Visual Studio plug-in, so you have the same functionality for editing SQL in both editors
I can say without hesitation that this product greatly enhances coding productivity and release-script quality for SQL scripting.
They offer a free 28-day evaluation for those new to the product.
What is metadata, and why does it matter? Research shows that knowledge workers spend at least 30% of their work time just searching for the right information, before they ever get to analyze or manipulate it.
I have seen this scenario repeatedly play out: “A developer who used to work here developed a database which is still in production use daily, but nobody 100% understands some of the more vague columns/fields in a lot of those tables. It never got documented and now that we are a bigger company, we’re trying to make sense of both the data structure and the code that depends on it.”
It’s so common, it’s cliché. The structure of the tables was put together by a developer with good intentions, but the column names might be very vague within the context in which they are used, like ‘Price’ or ‘Name’. That developer knew what she meant, but no one else does! Perhaps the company’s evolving business model is outgrowing and out-pacing the original data model, or the column names were just poorly chosen. Should ‘Price’ have been named RetailPrice or WholesalePrice? ‘Name’ – LastName, FirstName, MiddleName, FullName, TechnicianName, or CustomerName?
Sometimes the name of the table containing the column gives enough context; more often, it doesn’t.
With ‘Name’ you might just figure it out by looking at the actual data stored or the stored procedures that manipulate it. With ‘Price’, it might take a lot longer as you look at the calculations in various stored procedures, and ask business users their understanding of the intent of how the data was stored. Sometimes you never will get a straight answer. For ‘Percentage’, was that a discount percentage (retail, wholesale?), an upcharge percentage, commission percentage? You’re spec’ing or writing a tax report — is that ‘decTax’ a decimal-based column for use-tax or sales tax?
It’s the business’ accountability to give semantics, or meaning, to data, but it’s the responsibility of information workers to apply that meaning through metadata. Somewhere in the middle is usually a business analyst working to ensure the requirements for software creation or change are written correctly, and that meanings are captured and translated correctly for maximum understanding by IT. Sometimes there is no analyst and it all falls on a developer’s shoulders. The importance of this semantic layer in application development cannot be overestimated.
Fortunately, the solutions are not hard to implement
These time-consuming problems of reverse-engineering old code are very avoidable at the start with a little discipline. If they are not avoided in the first place, they can still be mitigated over time with tools that already exist (and are sometimes free), provided enough of the right people still have correct knowledge of the data’s use-context. Data-architecture reviews during development can solve these conundrums, but more often than not, the developer working closest to the data at the time it’s created can take just a little more time to set this up properly and “future-proof” the code to the greatest extent possible.
From a SQL Server and .NET perspective, I have seen scores of database tables and columns, and C# and VB.NET classes, with no metadata to describe the creator’s intent and use-cases. This information about the stored data is crucial to using these structures correctly, which in turn allows software architects, business analysts, database administrators, and developers to share a correct, mutual understanding of the data they are storing and manipulating.
For C# code, XML documentation is applied to classes, properties, and methods via the /// auto-completion trigger; for VB.NET, use the ''' trigger. In SQL Server, similar metadata is typically stored in the MS_Description extended property, but additional extended properties can be created and customized beyond that, per column, table, or other database object.
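As an illustration of going beyond MS_Description, here is a minimal sketch of adding and reading back a custom extended property; the property name Data_Steward, its value, and the Person.Address target are hypothetical placeholders.

-- Sketch: a custom extended property beyond MS_Description.
-- 'Data_Steward', its value, and the Person.Address target are hypothetical examples.
EXEC sys.sp_addextendedproperty
    @name = N'Data_Steward',
    @value = N'Finance team - accounts receivable',
    @level0type = N'SCHEMA', @level0name = 'Person',
    @level1type = N'TABLE',  @level1name = 'Address';
GO

-- Read back all table-level extended properties on Person.Address.
SELECT name, value
FROM sys.fn_listextendedproperty(NULL, 'SCHEMA', 'Person', 'TABLE', 'Address', NULL, NULL);
GO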
Data-dictionaries & Intellisense
Consistently applied, this metadata can drive third-party tools that pick it up and turn it into data dictionaries or automatic code documentation, refreshed periodically via a scheduled job, for use on a Confluence server, SharePoint, or another internal website. Think about it: a single source of truth for what each table and column represents in your SQL Server database, after the many developers and DBAs who have modified it over the years; a single source of truth for the meaning and proper use-cases of every class, property, and method in every namespace of your company’s .NET code. No more extensive code research to figure out, from usage context, what that field added to your table ten years ago really stands for, then not writing it down, only for someone else to repeat the whole process five years from now when a vendor implementation is underway.
Applied at the database level, plug-ins for SQL Server Management Studio like SQL Prompt will override the fairly weak native Intellisense, pick up this metadata, and display it as you create and modify stored procedures, views, functions, and so on. When you are dealing with anything more than a trivial database, this is a huge boon to productivity and coding accuracy. It’s the difference between knowing whether the column named ‘Price’ in the ‘Product’ table holds an MSRP, a discounted retail price, or a wholesale price, and whether that column is used by the internal applications, the public-facing website, or both. Knowing that information about the data is absolutely crucial to a software project’s success and to reducing labor cost.
I think this graphic from eRSA in Australia sums up the need for metadata well, though it comes at it from the perspective of a large research university:
When I talk about “coffee” I mean REAL coffee, the dark, rich stuff, and not just “hot brown water”, my term of “dis-endearment” for office-coffee, church-coffee, or your average low-grade convenience store coffee. Yuck.
I generally prefer a normal drip coffee-maker, and I never make coffee with anything other than distilled or purified water anymore (it’s better), to get away from the chlorine and fluoride, which influence the taste of the brew. I recently acquired a basic Keurig model (used; first generation), about five years after everyone else seems to have bought one. I like it – maybe even a lot – but it won’t replace my drip coffee-maker for the purity and quality of the brew. Keurigs are convenient, but in my opinion you pay a penalty in the quality of the brewed coffee.
That having been said, the Keurig is very convenient for my morning coffee especially, when I am under more of a time crunch. Good coffee for me via K-cups these days comes from:
- Gevalia – Dark Royal Roast: Like liquid dessert, but not quite as good as Seattle’s Best, Blend No. 4
- Folgers – Black Silk: My daily standby in absence of the Gevalia
- GreenMountain – Dark Magic: Wow, that was strong…Wow!
- Cameron’s Coffee – Highlander Grogg: Too light a roast, but delicious anyway
- Cameron’s Coffee – Cinnamon Sugar Cookie: Same; delicious
- McCafe – Premium Roast: As good as the drive-thru coffee; the other McCafe brews have a strange aftertaste, IMHO
That was in order of preference if I have all of them at the ready in my supply drawer. However, if making drip coffee, my hands-down preferences are:
- Seattle’s Best – Blend No. 4: Seriously, like drinking liquid dessert; *not* available in a K-cup as of early 2017
- Folgers – Black Silk: Good stuff at a great price
Thought you might like to know my java preferences.
Agree / disagree? What do you think makes for good coffee?