Wednesday, June 29, 2016

Uninstalling SQL Server from the Command Line to Remove Unwanted Background Instances

I had a note (or 'action item') to make sure people were informed about the 'uninstall' of SQL Server, since non-DBA server administrators would like more detail on this. The process is not obvious from Control Panel's Programs and Features (a long list of components), so here is my preferred method (*), which allows for a more targeted uninstall and can also remove all the components at the same time (below these appear as the /Features list). This can save DBAs too: on two occasions over the past three or so years, the wrong instance name was installed, and we had to remove it, reboot, and reinstall with the correct name afterwards.
The first thing you need to do is locate the SQL Server installation media, which for our DBAs is stored internally on network shares, or is available for download from Microsoft's site - just be sure to grab the correct version. If you don't know which version is installed, open, from the Start menu, Microsoft SQL Server, Configuration Tools, and finally SQL Server Configuration Manager. Select SQL Server Services, and in the right pane, if you see the Service Type SQL Server, right-click it and view the Properties. At the bottom of the Advanced page, the installed version is clearly indicated:
SQL Server Configuration Manager (2014)
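If you would rather query than click through the GUI, the same version information is available from any connected session (a quick sketch, via SQLCMD or Management Studio):

```sql
-- One string containing build number, edition, and platform
SELECT @@VERSION;
```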

Thus, since the version is 10.0.1600, we can see that we'll need to download the same media to remove it properly (and I am making sure I get rid of this instance on my workstation because I really do not need a vulnerability hanging around that I can remove). Here's the general format for the uninstall command, with several components in the example:
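Per Microsoft's documented setup.exe syntax, the general format is roughly as follows (the instance name is a placeholder, and you would trim /Features down to only the components you actually want gone):

```cmd
setup.exe /Action=Uninstall /InstanceName="InstanceName" /Features=SQL,AS,RS,IS,Tools /IndicateProgress
```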

SQL 2008 Express with SP1 was downloaded, matching the x86 version installed (as we can see in the image above). After running the downloaded executable as Administrator, I noticed that a folder was created, C:\e2f2e9f6f311cfa21667f9 (sort your primary disk by date and it should be obvious). If you have trouble finding it, find and run the setup executable - it is in the same directory.
Then I opened a command prompt in this directory and pasted the uninstall command from above, with the actual instance name and the feature I had to remove (in my case, only a single component):
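In my case that came down to something like the following (a sketch - MicrosofSCM is the instance chosen later in the wizard, and /Features=SQL, targeting just the engine, is my assumption here):

```cmd
setup.exe /Action=Uninstall /InstanceName="MicrosofSCM" /Features=SQL /IndicateProgress
```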

Then the dialog boxes open up. Since we asked to indicate progress with the /IndicateProgress switch, you'll still see the command line running behind the GUI, and you can find the SQL Server Setup Bootstrap logs afterwards for review in case of error. Mine were stored in:
C:\Program Files (x86)\Microsoft SQL Server\100\Setup Bootstrap\Log\20160627_142130\
SQL Server Removal/Install Setup Rules
Judging by the status window above, I should be safe to proceed:
Choose the MicrosofSCM instance, and confirm that the engine is to be removed. If we don't know all the components installed, this is the reason we use the show-installation-progress switch - it will thankfully list everything related to this install automatically (Select All if you want; in my case the SDK wasn't the issue):

Confirm the rules, and then, finally:

And of course, the configuration file path is there again under setup bootstrap.
Rejoice when you see that it has almost finished, and that the removal is confirmed.
It may not prompt you to, but it is a good idea to reboot after removing the instance, and to confirm the removal upon restart.
Success, and quite a relief (bet you have rarely seen such a long uninstall, right?).

Note that as soon as you exit the installer, the temporary folder that was created from the media is gone. If you really want to clean up fully, delete the SQLEXPR_x86_ENU.exe file from your Downloads folder (or the equivalent for your version, e.g. SQLEXPR_x64_ENU.exe for that platform).

Happy Uninstalls to you, and Happy Canada Day!

* Nota bene: the DB instance I am picking on here is just an example - that is not to say everyone should run out and remove it, since it could be in use by your workstation admins!

Friday, July 17, 2015

Speaking next Wednesday July 22nd, at Vermont SQL PASS group, thanks to Roman Rehak​ for the invite!

I'll be doing an update to Database Security Best Practices for the Vigilant DBA and Developer, with a look at SQL 2016 CTP2.1 Always Encrypted option.
UPDATE: Post presentation link to download Slide Deck

Wednesday, June 17, 2015

MySQL Database Security Best Practices for the Vigilant: The Terse Top XV

1. Strong passwords - there is no excuse; use online tools to generate 15 alphanumeric characters or more.
2. Rename the root account (UPDATE mysql.user SET user='NewName' WHERE user='root'; then FLUSH PRIVILEGES;)
3. Apply the latest MySQL DB Server patches (see 12)
4. Restrict access to the ProgramData and log (slow, error) folders.
5. Make sure users can connect only from specific IDs/IPs or application servers.
6. For DB users, grant privileges from most restrictive upwards - do not grant full access and then pare down.
7. Use skip-networking in the configuration file if the server is only to be used locally.
8. Configure disaster recovery with Bin Log replication to remote site if server is critical, or replicate backups at the very least and load them to a standby server.
9. Use SSL connections if handling sensitive data (and set secure_auth ON).
10. Change the TCP port to a non-standard one (thus not 3306)
11. Restrict the access and ownership of the DataDir in the Configuration to place data in a non-default place.
12. Use versions released after the Oracle purchase AND after the known vulnerabilities, thus 5.6.26 and up, since (update) in August 2015 new vulnerabilities were published for 5.6.24 and below (ouch).
13. Ensure root has not been granted remote access (SHOW GRANTS).
14. Ensure there are no empty passwords
15. Groom your user lists frequently and disable/drop unused accounts
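A few of the checks above can be run straight from the mysql client - a sketch, assuming MySQL 5.6's mysql.user layout (the password column is renamed in later versions):

```sql
-- Item 2: rename the root account, then reload the grant tables
UPDATE mysql.user SET user = 'NewName' WHERE user = 'root';
FLUSH PRIVILEGES;

-- Item 13: accounts reachable from any host (root should not appear here)
SELECT user, host FROM mysql.user WHERE host = '%';

-- Item 14: accounts with empty passwords
SELECT user, host FROM mysql.user WHERE password = '';
```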

PS While on the subject, here's some awesome MySQL training.

Sunday, January 04, 2015

Improve Your Wifi Connection Speed with the Use of a Simple 6’/2m USB Cable Extension

For years now we have gone without wired Ethernet at home, relying on Wifi, since the Cat5e cables in the house we moved into four years ago were damaged by construction and renovations. Even though we have purchased high-end routers that claim amazing speeds - including, the other year, a new Cisco router with AC capabilities - the thing that has really improved connectivity has been a high-quality shielded USB 2.0 extension cable (and once Wifi AC has thoroughly settled in, a USB 3.0 cable of the same quality will follow suit).

When you are close to the router this is not an issue, since speeds of 172-225 Mbps are plenty for most network traffic, but the extension really pays for itself when you are far from the router, where interference - even your own body in the way - makes a difference! Even though our router is placed close to the basement ceiling, well above the ground and slightly higher than the cement foundation of the house, when we connect two floors up, in the home office at the side of the house, Wifi speeds drop to 30 Mbps. At that point we are barely above our maximum download speed, and if you are on an encrypted connection to your servers at work, the connection is subject to frequent interruptions or is unbearably slow. That is when I looked at the pile of USB extension cables left over from a recent machine rebuild, and at the USB Wifi adapter that had been collecting dust since the purchase of the router (it was included in the Cisco router bundle). Like most laptop users, I had assumed the internal Wifi adapter was good enough - indeed, I was quite incorrect.

After adding the USB extension cable with the idle USB Wifi adapter to my laptop, extending it in the direction of the router across the desk and dangling down to the floor, connection speeds increased to between 72-98 Mbps - at least two and a half times faster. At this point, the flaky VPN connectivity to servers went away. I also made sure to route the cable away from anything that could cause interference - and I would suggest moving the final location of the USB adapter at the end of the extension around to find the sweet spot for communication with your router.

Wednesday, September 10, 2014

A Wonderful Weekend in Riyadh, Kingdom of Saudi Arabia, Speaking at SQLGulf #1, Alfaisal University

Thanks to the perseverance of SQL MVP Shehap El-Nagar, who invited me last year, this brave man from the Saudi Ministry of Higher Education managed to organise a great inaugural event. After the longest visa request process of my life - twice, since his original goal was to host the event in December 2013 - it finally happened: the 1st SQLGulf event, held this past Saturday at the beautiful new campus of AlFaisal University, Riyadh, K.S.A. Link to highlights of the event.

As one of six speakers, I gave two Presentations:
1) SQL Server Database Security Practices (on LinkedIn, or on SkyDrive ) with code sample.
AlFaisal University's courtyard (built 2008)
2) Compression for SQL Server (SkyDrive) but for more on compression, please see the updated background blog post for code and details.
For Saudis the weekend begins on Friday, and thus we faced the challenge of attracting people to give up their Saturday. It is all a little confusing for outsiders to the Kingdom, since the change is only recent. U.K.-based SQL MVP Satya Shyam Jayanty explained to me how a previous SQLSaturday event where he spoke was actually held on a Thursday (!), since that was then the beginning of the weekend; this time around we were able to hold it on an actual Saturday (which to Saudis is still the last day of rest before the new week begins). This cultural difference proved helpful for us internationally-based speakers, since it only required a day or two of leave from work, including travel time.

International speakers joined us from Jordan, the United Kingdom, Egypt and the U.S.A. as well, and I was the token Canuck. Here was the full schedule of sessions:

What AlFaisal University will look like once finished (about 60% complete now)
I think this was the first event where I finally broke through as a speaker (based on the feedback) - experience has certainly done its part. I have come a long way since SQLTeach in Vancouver in June 2009, where there was a tough crowd, including one man who had seemingly had a rough flight in from the UK and trashed all the speakers - speakers who were not paid, and who came simply to give back to the community.

 More photos of the short, but wonderful trip.  Especially a whole afternoon tour with Ahmed, a great taxi driver originally from Peshawar, Pakistan, who made sure to let me see all these beautiful parts of the capital.
The Prince`s SkyBridge, for great views.
AlFaisal's ultra-modern Mosque

Saudi Ministry of Higher Education
In Riyadh, for SQLGulf #1, all sessions were recorded in high definition (as have many of the events Shehap has organised before for SQLSaturday Riyadh), so I leave it to readers to decide whether the presentation made the grade (link to come). Shehap, who works in the building to the right, was certainly not alone in his efforts (see the group shot below) to start this series of events, which we hope will jump from city to city across the Persian Gulf.

The Entire SQL Gulf #1 Team at the end of the day, AlFaisal University Saturday, 30th, 2014

After first presentation with Mostafa Elmasry, blogger at

To come: SkyBridge view, and National Library.
Needless to say, I cannot wait for the next SQLGulf!

Selfie, in front of the late King AlFaisal's Palace, which is surrounded by the new University of the same name

Friday, August 22, 2014

Security Updates released in MS14-044, and An Approach to Microsoft’s Incremental Servicing Model

On August 12th, 2014 - another infamous 'Patch Tuesday' - Microsoft released a series of Security Updates for multiple versions of SQL Server, addressing potential denial-of-service attacks and an exploit in Master Data Services. Having already made my way through hundreds of instances, in all their respective environments, with a recent applicable Cumulative Update, the release of all these Security Updates has most definitely thrown a wrench into the patching plans. Here are the details for this specific bulletin.

The question is, if you're a DBA, how do you make sense of all the Cumulative Updates (CUs, which contain critical on-demand fixes requested by clients), Service Packs (SPs), Security Updates (SUs), General Distribution Releases (GDRs), and the acronym I have only noticed recently, QFE (most have heard of hotfixes, but this one means Quick Fix [from] Engineering)? This is where this explanation of Microsoft's Incremental Servicing Model from the SQL Server Team steps in to help. In fact, after 15 years of administering SQL Server, I had not found a page with such an up-to-date description of how SQL Server is patched - and the pointer is thanks to a recent visit from Felix Avemore, a Microsoft SQL Server Premier Field Engineer based in Quebec City.

For Microsoft Premier Field Engineers for SQL Server it is clear: your priority is to apply important Security Updates before anything else. But those updates often require a CU or an SP as a prerequisite, which makes patching a painful affair when you have the daunting task of updating 300-400 servers! That is where updated, clear documentation, system backups, and checklists come in rather handy - and perhaps deeper recommendations from the vendor to validate registry keys if your system is in production and ultra-sensitive. If you ever end up with a corrupt master, attempt a restore, but remember that you can always rebuild the instance cleanly with the exact Configuration.ini file found under the Setup Bootstrap folder (please see a previous post on command-line installs for more).
Which updates to apply depends on the build you are at, so for 2008-2014, here's a quick guide:

SQL Server Version | General Distribution Release (GDR) | Quick Fix [from] Engineering (QFE)
                   | SP1 (without any CUs), SP2 (..)    | CU1 - CU2, SP1 CU1-11, SP2 CU1-13
2008 R2            | SP3 (..)                           | SP3 CU1-CU17
Note that if you are on SQL 2014 RTM CU3 or SQL 2012 SP2, you are already covered at those build levels.
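To see which branch applies to your instance, check the current build and patch level first - a quick query you can run anywhere:

```sql
-- ProductVersion is the build number; ProductLevel shows RTM or SPn
SELECT SERVERPROPERTY('ProductVersion') AS Build,
       SERVERPROPERTY('ProductLevel')  AS PatchLevel,
       SERVERPROPERTY('Edition')       AS Edition;
```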

There are clear arguments, as laid out well by Glenn Berry here, that you should apply recent Cumulative Updates to maintain proper performance, stability, and regular maintenance of your SQL Server Instances.
Are QFEs cumulative? Judging by their build levels it would appear so, and after reading several definitions I can confirm that they are indeed cumulative hotfixes as well.

Hope this clears up some of the muddy pathway you'll find attempting to keep up with patches on SQL Server.
Happy Patching

This post was given a mention in 's edition of September 1st, 2014:

Tuesday, May 13, 2014

How to Fix that Never-Ending Join Dragging Down the Whole DB Server Instance – Pre-Populate the Largest Bugger in Temp with Indexes

Now that I have been blogging away here and on SSC for a good five years, the editors recently thanked us for our work. They also provided valuable feedback: we should write about real-world situations that DBAs encounter. The following is about optimising performance - an actual task that has recurred several times, in various production environments, since I first wrote on the subject: an instance bogged down by one massive query inside a stored procedure that has to run all the time, yet is so huge, important and/or complex that everyone is afraid, or unsure, how to resolve it.

In this post I hope to explain clearly how combining data definition language (DDL) for your temporary tables with non-clustered indexes can improve the performance of stored procedures that join data from one or many large tables - by up to seventeen times (at least, that was the case the previous time I saw this type of query to optimise), as I have seen on stored procedures that work with tables in the tens of millions of rows.

Temporary tables, if used frequently or in stored procedures, end up consuming significant disk input/output. To start, be aware that they are created as a heap by default. As experience has shown, if you are cutting up a very large table via the temporary database, it is best to do your DDL first, before running the rest of your operation against the temporary data - as opposed to SELECT * INTO #temp. We should avoid SELECT * INTO #temp as much as possible, unless the number of rows is insignificant, because as a single statement it creates great disk contention within the temp database:

(N.B. the assumed prerequisite is that you have identified the worst query from your plan cache, or have seen the code under Recent Expensive Queries in Activity Monitor, sorted by the worst-performing resource.)
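If you prefer a query over Activity Monitor, a sketch against the plan cache DMVs looks like this (ordering by worker time here; swap in reads or duration as your bottleneck dictates):

```sql
-- Top cached statements by total CPU consumed
SELECT TOP (10)
       qs.total_worker_time, qs.total_logical_reads, qs.execution_count,
       st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```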

CREATE TABLE #MyLot  -- you'll see that we only need a few columns for the join in the end
(      [LotId] [int] NOT NULL,  -- plain int, since we insert existing LotId values below
       [LotCode] [nvarchar](10) NOT NULL )

INSERT INTO #MyLot ( LotId, LotCode )
 -- e.g. you can also avoid NULLs by using ISNULL(col,0)
SELECT LotId, LotCode
FROM MyBigLotTable
WHERE ...  -- your matching clause / variables go here
 -- this is where you found out what joins this massive table with the others; slice it up
 -- horizontally and vertically before (!) making that big join,
 -- and that is where we obtain the significant performance gains

CREATE NONCLUSTERED INDEX idx_Lot ON #MyLot ([LotCode] ASC)
-- create the index on the matching column used in the 'big' join (in this case a 5-table join),
-- i.e. the glaring ID field
-- then integrate all this preparation of #MyLot into the main, super-slow query:

SELECT [BIResult].[Number], [Loc].[LocId], [BLoc].[BILocId], [BIResult].[LotCode],
       #MyLot.[LotId], [BIResult].[PCode], [P].[PId], [BIResult].[Stock],
       ISNULL([BIResult].[StatusCode], [BIResult].[UnitCode])
FROM   ...  -- the source rowset shredded into [BIResult] is elided in the original
       WITH (
              [Number]     SMALLINT       N'@Number',
              [LocID]      NVARCHAR(10)   N'@LocID',
              [PCode]      NVARCHAR(18)   N'@PCode',
              [LocCode]    NVARCHAR(4)    N'@LotCode',
              [PCode]      NVARCHAR(10)   N'@LotId',
              [Stock]      NVARCHAR(MAX)  N'@Stock',
              [StatusCode] NVARCHAR(3)    N'@StatusCode',
              [UnitCode]   NVARCHAR(1)    N'@UnitCode'
            ) AS [BIResult]
       JOIN [Pt] ON [Pt].[Number] = [BIResult].[Number]
       LEFT JOIN #MyLot  -- [Lot], the huge table, was here before
              ON #MyLot.[LotCode] = [BIResult].[LotCode]
       JOIN [P] ON [P].[PtId] = [Pt].[PtId]
              AND [P].[PCode] = [BIResult].[PCode]
       JOIN [SLoc] ON [SLoc].[PtId] = [Pt].[PtId]
              AND [SLoc].[SLocCode] = [BIResult].[SLocCode]
       JOIN [BLoc] ON [BLoc].[LocId] = [Loc].[LocId]
              AND [BLoc].[BLocCode] = [BIResult].[BLocCode]
WHERE  CAST([BIResult].[Stock] AS DECIMAL(13)) ...

-- always explicitly/clearly drop the temp at the end of the stored proc.
drop table #MyLot -- and the respective index is dropped too with it

Happy optimising!

Shed the heavy weight of that extra slow query bogging your instance down.

Tuesday, January 14, 2014

Microsoft Project Migration Template for the move to SQL 2012

For those planning a move to SQL Server 2012 - although the process can apply to many database migrations - perhaps this Microsoft Project migration template could help? *
In this plan there are many possible steps; it is better to trim down from too many to just those applicable than to miss steps. As an experienced migrator, you may already know an even better order of tasks for a successful migration - by all means, share it with us below in the comments. My approach here is to delve properly into the domain of project management, as a DBA must (or should) do from time to time, so that if an official project manager is assigned to you, this document can be handed over to them as a good way of organising the upcoming work.
A quick nota bene at the planning stage: do not skip the time estimations, which in turn feed the analysis of the critical path. There may be a point where you have to pool resources with another team member, or pull in a consultant, to ensure project delivery timelines are met; somewhere along the critical path you might want to take this proactive step to mitigate deadline risk. In this way, whole-project planning with respect to time estimations is a predecessor to accomplishing the task.
And sorry for the notes sometimes being in French - I just tend to mix up my sources/references often enough. This template has a little bit of history: while migrating databases in early 2005 for the Artificial Insemination [of cows] Centre of Quebec (CIAQ), Mathieu Noel (his profile on helped me out greatly while writing this document. The template has had four major revisions so far, the most recent being this one for SQL 2012.
 * To view an MPP file without installing Project itself, you can use this free viewer. Exports of the Migration project plan to PDF and Excel are also available on my SkyDrive.
PS As with all migrations, one should constantly try to adhere to the keep-it-simple rule (K.I.S.S.). Even this old post about a simple Oracle 10 migration to SQL Server 2008 was no exception: what we did from the very beginning was create insert scripts of all the data in the tables (not a huge database - tens of megabytes only), since the schema export had already been done for us by a vendor (I only had to make minor tweaks, appreciatively). Before actually going through each table's insert script one by one - to adjust the fully qualified table names and add SET IDENTITY_INSERT ON/OFF statements, with a quick truncate before the begin tran/inserts/commit batch - I had scripted out, in a previous step, all the drop/create statements for foreign keys and constraints, to bring all the data in quickly without per-table FK/constraint drop and recreation.
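The per-table load pattern described above looks roughly like this (a sketch; the table and column names are placeholders, and the generated insert scripts replace the single example row):

```sql
-- Assumes the scripted FK/constraint drops have already been run (previous step)
TRUNCATE TABLE dbo.MyTable;
SET IDENTITY_INSERT dbo.MyTable ON;
BEGIN TRAN;
INSERT INTO dbo.MyTable (Id, Name) VALUES (1, N'example row');
-- ... the rest of the generated insert statements ...
COMMIT;
SET IDENTITY_INSERT dbo.MyTable OFF;
-- then re-run the scripted CREATE statements for the FKs and constraints
```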

Monday, December 16, 2013

SQL Server 2012 AlwaysOn Presentation Given at Vermont PASS SQL Server User Group Last Week

This past Wednesday evening, 11th Dec, I braved the snowy roads down from Montreal to Winooski (Burlington area) to join a very friendly crowd at MyWebGrocer and presented all I know about AlwaysOn for the Vermont Professional Association for SQL Server, run by Roman Rehak.
I shall be posting an AlwaysOn script shortly, once I have cleaned up the code - for those who were there, Roman Rehak was also provided with all the files to redistribute.
If the presentation link is blocked for you, please try this one on LinkedIn's server (see right after summary).
Thanks again to Roman and especially My Web Grocer for sharing its amazing work space with us.

Wednesday, December 11, 2013

SQL Server Installation Folder Setup Log and Command Line Install Information

The other morning, during our regular meeting amongst fellow DBAs, I was commenting on where to read up on installation issues, or for a simple text validation of which components were added during an installation. This is the folder I want to point out: rootDrive:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log

Every install action - an Add of a node, settings validation, repair, remove, uninstall - appears in this folder, and as soon as you have installation issues, go right to the specific Detail.txt under the Log subfolder corresponding to the date and time the installation was executed. You will see that Summary.txt has very little information, as its name suggests.
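To jump straight to the newest log, a quick sketch from the command prompt (the path below assumes SQL 2012's 110 folder; adjust for your version):

```cmd
:: List the Setup Bootstrap log folders, newest first
dir "C:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\Log" /a:d /o-d
:: then open the Detail.txt inside the folder matching your install's date and time
```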

For those of you who have a solid-state drive at home (desktop, laptop, etc.), to become familiar with how SQL Server is installed, I recommend downloading the SQL 2012 Developer Edition ISO and familiarising yourself with the command-line installation - and if we could compare total install times versus a GUI install (it's all in the logs, no need for a stopwatch), that would be cool :) to share.
For those who are curious about command-line installs, here are some installation examples. Note that you can now mount the ISO natively in Windows Server 2012/Windows 8 and run the command directly from the mounted drive letter. I have avoided the use of /QS below because I like to see the GUI install, to validate the parameters a second time and ensure the instance starts off on the right foot.
Command-line install for a non-clustered instance with Analysis Services:
setup.exe /ACTION="Install" /AGTSVCPASSWORD="19CharacterPassword" /ASSVCPASSWORD="19CharacterPassword" /SAPWD="19CharacterPassword" /SQLSVCPASSWORD="19CharacterPassword" /INDICATEPROGRESS="TRUE" /ENU="True" /UpdateEnabled="TRUE" /UpdateSource="Drive:\FOLDERCONTAININGLatestUpdate" /FEATURES=SQLENGINE,REPLICATION,FULLTEXT,DQ,AS,RS /HELP="False" /X86="False" /INSTALLSHAREDDIR="C:\Program Files\Microsoft SQL Server" /INSTALLSHAREDWOWDIR="C:\Program Files (x86)\Microsoft SQL Server" /INSTANCENAME="InstanceName" /INSTANCEID="InstanceName" /ERRORREPORTING="True" /INSTANCEDIR="C:\Program Files\Microsoft SQL Server" /AGTSVCACCOUNT="InstanceSpecificServiceAccountName" /ASSVCACCOUNT="InstanceSpecificServiceAccountName" /ASSVCSTARTUPTYPE="Automatic" /ASCOLLATION="Latin1_General_CI_AS" /ASDATADIR="DriveName:\olapdb_InstanceName" /ASLOGDIR="DriveName:\olaplog_InstanceName" /ASBACKUPDIR="DriveName:\olapbakup_InstanceName" /ASTEMPDIR="DriveName:\olaptmp_InstanceName" /ASCONFIGDIR="DriveName:\OLAP\Config" /ASPROVIDERMSOLAP="1" /ASSYSADMINACCOUNTS="ListOfUsers" "AdditionalInstanceSpecificAccount" /ASSERVERMODE="MULTIDIMENSIONAL" /FILESTREAMLEVEL="0" /SQLCOLLATION="Latin1_General_CI_AS" /SQLSVCACCOUNT="InstanceSpecificServiceAccountName" /SQLSYSADMINACCOUNTS="InstanceSpecificServiceAccountName" "AdditionalInstanceSpecificAccount" /SECURITYMODE="SQL" /INSTALLSQLDATADIR="DriveName:\sqlsysdb_InstanceName" /SQLBACKUPDIR="DriveName:\sqlbakup_InstanceName" /SQLUSERDBDIR="DriveName:\sqlappdb_InstanceName" /SQLUSERDBLOGDIR="DriveName:\sqlapplog_InstanceName" /SQLTEMPDBDIR="DriveName:\sqltmpdb_InstanceName" /RSSVCACCOUNT="NT Service\ReportServer$InstanceName" /RSSVCSTARTUPTYPE="Automatic" /FTSVCACCOUNT="NT Service\MSSQLFDLauncher$InstanceName"

And a command line instance Repair:
setup.exe /QS /ACTION="Repair" /ENU="True" /INSTANCENAME="InstanceName" /ASSVCACCOUNT="InstanceSpecificServiceAccountName" /ASSVCPASSWORD="19CharacterPassword" /SAPWD="19CharacterPassword" /SQLSVCPASSWORD="19CharacterPassword"

For the most part, the cluster installation is exactly the same as the standalone SQL Server installation, with the exception of a few screens in the GUI. I would not recommend a quiet Failover Cluster installation from the CMD prompt, since you miss all the checks of whether the parameters are valid for installation - unless you run it without the /QS parameter, which gives an attended installation launched from the command line. I find this a faster way of feeding the GUI installation procedure, validating as you go that the parameters actually work, before clicking Next (or equivalent) at each step.

Adding a node, however, is straightforward unattended and a real time-saver. N.B. when you add a node, you must provide the passwords for the service accounts again.

setup.exe /ACTION="AddNode" /AGTSVCPASSWORD="StrongPassword" /SQLSVCPASSWORD="StrongPassword" /INDICATEPROGRESS="true" /ENU="True" /UpdateEnabled="False" /UpdateSource="Drive:\FOLDERCONTAININGLatestUpdate" /HELP="False" /INDICATEPROGRESS="TRUE" /X86="False" /INSTANCENAME="InstanceName" /FAILOVERCLUSTERGROUP="ClusterRoleName" /FAILOVERCLUSTERIPADDRESSES="IPv4;;Public;" /FAILOVERCLUSTERNETWORKNAME="SQLVirtualClusterName" /CONFIRMIPDEPENDENCYCHANGE=1 /AGTSVCACCOUNT="domain\InstanceSpecificServiceAccount" /SQLSVCACCOUNT="domain\InstanceSpecificServiceAccount"

--- this next one is for when you also have to add AS on the second node

Using a Configuration file to add a second node to a cluster:
setup.exe /qs /ACTION="AddNode" /CONFIGURATIONFILE="DRIVEONOTHERNODE:\Program Files\Microsoft SQL Server\110\Setup Bootstrap\ConfigurationFileINSTANCENAME.ini" /AGTSVCPASSWORD="15CharacterPassword" /ASSVCPASSWORD="15CharacterPassword" /SQLSVCPASSWORD="15CharacterPassword" /INDICATEPROGRESS="TRUE"

Changing the database server collation - err, if you set it wrong by accident (in my experience this works exclusively for standalone instances):
Setup /QS /ACTION=REBUILDDATABASE /INSTANCENAME="InstanceName" /INDICATEPROGRESS="TRUE" /SQLSYSADMINACCOUNTS="ML\oth_mlsqldbms" "listOfAccounts" "domain\userGroup" /SAPWD="StrongPassword" /SQLCOLLATION=SQL_Latin1_General_CP1_CI_AS

References (for all the other options):