Latest Posts

Using Exchange Web Services Managed API in PowerShell

I’ve been finding myself in the Exchange 2013 world for the last few months, helping with some administration and updates. As a result I stumbled across an unknown (to me), yet cool Exchange API. Naturally I couldn’t resist trying it out in PowerShell. For those who just want a working script, here it is. This script will return the 10 most recent items.

# This requires the Exchange Web Services Managed API to be installed on the computer where this script is being run

# Download at - http://www.microsoft.com/en-us/download/confirmation.aspx?id=42022

Add-Type -Path "C:\Program Files (x86)\Microsoft\Exchange\Web Services\2.1\Microsoft.Exchange.WebServices.dll"

#Connection - https://msdn.microsoft.com/en-us/library/microsoft.exchange.webservices.data.exchangeversion%28v=exchg.80%29.aspx

$EmailAccount = "your email address"

#Change the Exchange Version to work with your environment

$EWS = New-Object Microsoft.Exchange.WebServices.Data.ExchangeService([Microsoft.Exchange.WebServices.Data.ExchangeVersion]::Exchange2013_SP1)

#Change "UseDefaultCredentials" to $false if you want to specify alternate creds

#$EWS.UseDefaultCredentials = $false

#Discover and set the EWS endpoint URL for this mailbox
$EWS.AutodiscoverUrl($EmailAccount)

$inbox = [Microsoft.Exchange.WebServices.Data.Folder]::Bind($EWS,[Microsoft.Exchange.WebServices.Data.WellKnownFolderName]::Inbox)

$mailitems = $inbox.FindItems(10)

$mailitems | ForEach {$_.Load()}

$mailitems | Select Sender,Subject,Body

For those of you who are still with me…a little history.

The Exchange Web Services (EWS) Managed API appears to have come onto the scene in early 2009 (where was I?) with its 1.0 beta release. Fast forward to today: it’s at release 2.2 and applies to Exchange 2007 SP1 and up, including Office 365. This API can be used to work with e-mail messages, calendar, task, and contact information. Translation: it’s so much easier for a non-developer (like me!) to harness Exchange resources without dealing with the underlying SOAP interface of EWS. I mean, how cool is it that Microsoft took the time to make a wrapper over EWS?
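To appreciate what the wrapper hides: a FindItems call is, on the wire, a SOAP request against Exchange.asmx. Here is a rough, hand-built sketch of such an envelope (in Python for illustration; element names follow the EWS schema, but treat this as a sketch, not an exact wire capture):

```python
# Sketch of the FindItem SOAP envelope the Managed API builds for you.
# Element/attribute names follow the EWS messages/types schemas; illustrative only.
SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
M_NS = "http://schemas.microsoft.com/exchange/services/2006/messages"
T_NS = "http://schemas.microsoft.com/exchange/services/2006/types"

def find_item_envelope(max_items=10):
    """Return a FindItem request for the newest items in the inbox."""
    return f"""<soap:Envelope xmlns:soap="{SOAP_NS}"
               xmlns:m="{M_NS}" xmlns:t="{T_NS}">
  <soap:Body>
    <m:FindItem Traversal="Shallow">
      <m:ItemShape><t:BaseShape>AllProperties</t:BaseShape></m:ItemShape>
      <m:IndexedPageItemView MaxEntriesReturned="{max_items}"
                             Offset="0" BasePoint="Beginning"/>
      <m:ParentFolderIds><t:DistinguishedFolderId Id="inbox"/></m:ParentFolderIds>
    </m:FindItem>
  </soap:Body>
</soap:Envelope>"""

print(find_item_envelope(10))
```

Hand-assembling (and parsing) envelopes like this for every call is exactly the busywork the Managed API takes off your plate.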

The construction was super simple, and is executed like this:

1.) Create the client object (the New-Object ExchangeService line above), passing the ExchangeVersion that matches your environment.

2.) Set the URL by passing the “AutodiscoverUrl()” method your email address. This will go out to your Exchange environment to get and fill the object with the correct “Exchange.asmx” web service URL.
This might be something like “https://mail.contoso.com/EWS/Exchange.asmx”

3.) Create a variable for the inbox folder

4.) Create a variable for mail items, filling it with a search from the inbox using the “FindItems()” method. Incidentally, this is where you might decide to do other things, like use the “MarkAllItemsAsRead()” method to take care of those pesky unread items, should you desire.

5.) Loop through each of the mail items so they are loaded into PowerShell using the “Load()” method

6.) Display them, selecting whatever fields you deem important. For the sake of this demonstration, it’s “Sender,Subject,Body”.


All this to say, I really am wildly excited about being able to use this as an alternative method for sending e-mail from PowerShell scripts, where I currently use the .NET method. Not to mention it demonstrates how powerful and friendly PowerShell really is. Past that, I can see application within SharePoint: synchronizing calendar and task items from a mailbox without a third-party program. I’m not saying this is the be-all and end-all by any means, but it’s a nice new egg of putty I can make immediate use of to fill some gaps until a better solution can be designed and implemented.

It certainly made me think…”Hmm…this might work” for this, that, and the other thing too!

posted @ 3/18/2015 5:24 PM by Greg Tate

P2V Windows 10 and Parallels

I have been running Windows 10 since the preview was released. My experience so far has been positive, and the transition has been very smooth. As someone using a machine without a touchscreen, I would say it is a lot more functional than Windows 8/8.1. I still find the full-screen apps lacking, but overall the operating system feels smoother to me, and I enjoy using it.

I own, and generally enjoy using, a MacBook Pro 15. My only complaint on the MacBook is the lack of physical Home, End, Insert, and Delete keys. I have run Windows of various versions on it in the past using Parallels to great success.

Windows 10 has been happily living on my PC laptop, but I wanted to consolidate it, if I could, onto the MacBook Pro so I could carry one thing. The resolution on the MacBook Pro is much higher as well and makes some of my work much easier. I am typing this on my Windows 10 VM that was restored onto Parallels on the MacBook Pro, and it's been working like a champ for a few days. I thought I would document the steps I took to move from the physical machine to this virtual machine in case anyone else needs them, as it was not that complicated once I located the information I needed.

  1. Make a system image backup of Windows 10
  2. Restore the system image backup of Windows 10 onto Parallels VM

That is the high level, now I'm going to go through the steps I took to get there.

  1. Make a system image backup of Windows 10
    1. I grabbed a USB drive, and plugged it in where it was recognized as F:
    2. Ran the following command
      1. wbAdmin start backup -backupTarget:F: -include:C: -allCritical -quiet
    3. Note that you will need to replace F: with whatever drive you want to use.
  2. Restore the system image backup of Windows 10 onto Parallels VM
    1. Note: This was a little trickier for me, because my VM didn't seem to see my USB drive no matter how I mounted it to the machine, so I used my NAS instead.
    2. Connected to a share on my NAS from my PC
    3. Copied the folder with the backup from my USB drive to the share.
    4. On the Mac, opened Parallels and selected my Windows 10 ISO
    5. Let Parallels do the entire install (I happened to be going somewhere for a while, so this just completed while I was gone; letting it finish isn't strictly required, since the restore overwrites it)
    6. Held SHIFT and restart to get into the recovery console
    7. Went to repair, advanced, system image recovery (path may be a little different, but as long as you get to system image recovery, it's ok)
    8. Stepped through wizard and mapped drive to NAS share
    9. Picked backup and restored
    10. Restarts happened after that as needed.
    11. Once Windows was started, took a snapshot and began cleaning
    12. Installed Parallels Tools and removed the driver software I had installed for my physical hardware
    13. Took another snapshot when all was stable and started working.

One thing to note is that the Parallels Tools installation needs quite a few reboots. So if you are rebooting over and over again manually, you may think something is wrong, but it is probably fine. I wasn't being super observant, but I think it was 4 or 5 reboots that all looked like they were doing the same thing when they started.

This was my first time using the system image recovery tool, and I have to say, compared to Time Machine, it's a bit ridiculous and convoluted. It *did* work, though, once I found the commands and figured out what to do. By contrast, when I got my new Mac, I just pointed it at my Time Machine location (which setup found automatically for me) and pushed a button. I imagine an OS X VM would work the same way. Apple! Why don't you have HOME/END/INSERT/DELETE KEYS!!! Then you would be perfect!



For the page that I drew my inspiration from, look here:


posted @ 2/6/2015 6:44 PM by Roy Ashbrook

Simple RIAK and C# Example


Install It:

  1. Have a Linux box somewhere. I spun up a new CentOS 7.0 droplet on DigitalOcean for five bucks and turned it off when I was done with it.
  2. http://docs.basho.com/riak/latest/ops/building/installing/rhel-centos/
    1. Basically, I logged into my new machine and ran:
    2. sudo yum install http://yum.basho.com/gpg/basho-release-5-1.noarch.rpm
    3. sudo yum install riak
  3. If you want this to be available outside of your machine, you need to modify the ip address riak is bound to.
    1. If you were logged in as root, go to the /etc/riak folder and edit the app.config
    2. Modify the "http" entry to use your public IP address, as well as the 'pb' entry up top.
  4. Run "riak start" to start up riak
  5. Check http://<your ip>:8098/riak/status; it should return some JSON


Use it:

  1. Open LINQPad (or whatever you want)
  2. NuGet search for riak
  3. Install CorrugatedIron
  4. Create a app.config file that is something like this:
    1. <configuration>
         <configSections>
           <section name="riakConfig" type="CorrugatedIron.Config.RiakClusterConfiguration, CorrugatedIron"/>
         </configSections>
         <riakConfig nodePollTime="5000" defaultRetryWaitTime="200" defaultRetryCount="3">
           <nodes>
             <node name="mynodename" hostAddress="actualnodeaddress" pbcPort="8087" restPort="8098" poolSize="0" />
           </nodes>
         </riakConfig>
       </configuration>
    2. I think this file can be named anything really, I actually called it riak.config and included a path down below.
  5. Try some C# code like this:
    1. void Main()
       {
           //config file location
           var fl = @"path to my app.config";
           //name of the config section in the config file above
           var cs = "riakConfig";
           //name of our riak bucket to store things in
           var b = "mybucket";
           //connect to the cluster using our config file
           var clst = RiakCluster.FromConfig(cs, fl);
           //create a client
           var clnt = clst.CreateClient();
           //create an object to store
           var kvp = new KeyValuePair<string,string>("0","zero");
           //put it in a bucket in riak, key it, and drop the object in there
           var o = new RiakObject(b, kvp.Key, kvp);
           //put the object in riak
           clnt.Put(o);
           //go get the object using the same key as above
           var r = clnt.Get(b, kvp.Key);
           //dump the value we got back
           r.Value.Dump();
       }

posted @ 9/10/2014 12:49 AM by Roy Ashbrook

Pulling a Sharepoint 2007 list into Excel as Raw XML

I recently had a need to mash-up some data from a SharePoint 2007 list in an Excel document I was working on. I already knew that I could work with SharePoint 2007 data in Excel by using the following instructions from Microsoft:

  1. Do one the following on a SharePoint site:
Windows SharePoint Services 3.0
  1. If your list is not already open, click its name on the Quick Launch. If the name of your list doesn't appear, click View All Site Content, and then click the name of your list.
  2. On the Actions menu, click Export to Spreadsheet.
  3. If you are prompted to confirm the operation, click OK.
Windows SharePoint Services 2.0
  1. If your list is not already open, click Documents and Lists, and then click the name of your list.
  2. On the page that displays the list, under Actions, click Export to spreadsheet.
  1. In the File Download dialog box, click Open.
  2. If you are prompted whether to enable data connections on your computer, click Enable if you believe the connection to the data on the SharePoint site is safe to enable.
  3. Do one of the following:
  • If no workbook is open, Excel creates a new blank workbook and inserts the data as a table on a new worksheet.
  • If a workbook is open, do the following in the Import Data dialog box that appears:
    1. Under Select how you want to view this data in your workbook, click Table, PivotTable Report, or PivotChart and PivotTable Report.
    2. Under Where do you want to put the data, click Existing worksheet, New worksheet, or New workbook.

If you click Existing worksheet, click the cell where you want to place the upper-left corner of the list.

  1. Click OK.



That’s all very well, but in my case I had an existing spreadsheet with quite a few tables and an existing Excel model. I didn’t want to go through these gyrations. Isn’t there some way to just get the data right off the list as a web page?

Yes. Enter the “Import XML data” feature in Excel. MS offers the following guidance for getting XML data from a web service:

Import XML data from a Web service

To do the following procedure, you must have access to a server that is running Windows SharePoint Services. A default installation of Windows SharePoint Services provides a data retrieval service for connecting to data in SharePoint lists. A SharePoint site administrator can install the Microsoft Office Web Parts and Components to add additional data retrieval services for Microsoft SQL Server and Microsoft Business Solutions. The installation program for Microsoft Office Web Parts and Components is available from the Downloads on Microsoft Office Online.

  1. On the Data menu, point to Import External Data, and then click Import Data.
  2. Do one of the following:

Open an existing data source

Create and open a new data source connection

  1. Select one of the following options:
  • XML list in existing worksheet

The contents of the XML data file are imported into an XML list in the existing worksheet at the specified cell location.

  • XML list in new worksheet

The contents of the file are imported into an XML list in a new worksheet starting at cell A1.

  1. If the XML data file does not refer to a schema, then Excel will infer the schema from the XML data file.
  2. To control the behavior of XML data, such as data binding, format, and layout, click Properties, which displays the XML Map Properties dialog box. For example, existing data in a mapped range is overwritten when you import data by default, but you can change this.



But how to get the URL to call for the XML data? The following link holds the key:


You have to replace {0} with your site URL, {1} with the GUID for the list ID on SharePoint, and {2} with the view GUID. How do you get these? The easiest way (I think) is to go to your list, then select the view you want. Click the view dropdown and select “Modify this view.” This opens the screen where you can modify that view, and the URL in the browser should have a View and a List value. I stripped off the %7B and %7D, as those are the URL-encoded { and } respectively. You don’t need to use a view and can omit it completely if you want to connect directly to the list, but I found I normally wanted to get a certain view, or wanted to create a special view just for this activity.
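That GUID extraction can also be scripted. A small sketch using Python's standard library, with a made-up URL standing in for your own "Modify this view" address:

```python
from urllib.parse import urlparse, parse_qs, unquote

def extract_guids(modify_view_url):
    """Pull the List and View GUIDs out of a 'Modify this view' URL,
    stripping the %7B/%7D ({ and }) that SharePoint URL-encodes."""
    qs = parse_qs(urlparse(modify_view_url).query)
    clean = lambda v: unquote(v).strip("{}")
    return clean(qs["List"][0]), clean(qs["View"][0])

# Hypothetical example URL in the shape SharePoint 2007 produces:
url = ("http://sharepoint/site/_layouts/ViewEdit.aspx"
       "?List=%7B11111111-2222-3333-4444-555555555555%7D"
       "&View=%7BAAAAAAAA-BBBB-CCCC-DDDD-EEEEEEEEEEEE%7D")
print(extract_guids(url))
```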

Regardless, you can now follow the auto prompts to get the data into a spreadsheet. You can also open the developer tab and dive into the XML itself (see the MS link above for more instructions) and drag and drop the fields to other locations. Now when you get the data it will put it wherever you bound it.

By default it will dump the data into a table and will include the schema items, which you don’t need. You can open the Developer tab, select “Source” in the XML section, and then just deselect the ns2:Schema element. It will stop syncing those fields, and you’ll have to clean up your table to get rid of them. I was able to rename the columns and so on as needed, but since this is a read-only feed I was typically using it to calculate other columns, so I frequently stuck with the ows_* columns that are the defaults for the data element.

Screenshot of the XML section from Developer tab in Excel:

XML Refresh Data


Once I figured out I needed to find the XML import stuff for Excel, I was able to find lots of articles on it online to help. The following was my favorite and links to a lot of other ones.


posted @ 7/11/2014 3:39 PM by Roy Ashbrook

New-SelfSignedCertificate and CERT Provider


Well, the non-whimsical title aside, I must say “hats off” to those PowerShell gurus at Microsoft. You’ve made my life a bit easier. This quick post is a look at the New-SelfSignedCertificate cmdlet and the PowerShell Certificate (Cert:) provider.

I realize both are rather self-explanatory: the first creates a self-signed certificate, whereas the other provides directory-like interaction with the certificate stores from within PowerShell, essentially making the need to spin up ye old MMC certificates console a moot point.

Suppose you’re tasked with building a functional lab {insert Microsoft software title here} environment. Naturally you want to automate as much as possible, yet those pesky certificates cause you to break open IIS to create a self-signed certificate. Sure, it’s only an extra manual step or two, but my take: why do manual when automation isn’t but half the effort more?

That said, what if the requirements are for a SharePoint 2013 lab with a functioning app model? The app model brings with it a requirement for wildcard certificates. Now, I could be missing something, but my testing within IIS 8 didn’t allow for specifying the FQDN (CN).

I guess at this point one could consider a few options. One might be inclined to stand up a lab PKI (or leverage an existing one). A simpler but more costly route would be to use public certificates.

If time and money are constraints, then our friendly neighborhood PowerShell cmdlet and Cert: provider can quickly help us out. After all, New-SelfSignedCertificate will let us specify our DNS name or DNS names. Yes, that was a plural of names, as in more than one. And since we are talking DNS names, we only find ourselves limited by what is defined in DNS or the server HOSTS file (none of us do that, right?)

So take this snippet and incorporate it into your automated lab builds, or conversely, offer your own opinion.

#Issue A Self Signed Cert

New-SelfSignedCertificate -CertStoreLocation Cert:\LocalMachine\My -DnsName *.subdomain1.subdomain.domain.org, hostname.subdomain1.subdomain.subdomain.org

#Export Self Signed Cert To Temp Location

Get-ChildItem Cert:\LocalMachine\My | Where {$_.Subject -like "*subdomain.domain.org"} | Export-Certificate -Type CERT -FilePath E:\Temp\SelfSign.cert

#Import To TRUSTED ROOT AUTHORITY – This prevents browser Errors

Import-Certificate -FilePath E:\temp\SelfSign.cert -CertStoreLocation Cert:\LocalMachine\Root

#Clean Up Temp

Remove-Item -Path E:\Temp\SelfSign.cert

#Move Certificate From Personal To WebHosting

Get-ChildItem Cert:\LocalMachine\My | Where {$_.Subject -like "*subdomain.domain.org"} | Move-Item -Destination Cert:\LocalMachine\WebHosting


Hope this helps someone.

posted @ 6/30/2014 4:49 PM by Greg Tate

Generating a Date Dimension table in C#

I use a table similar to this quite frequently for my own personal reporting and analytics on various things. I figured I would polish it slightly and publish it in case anyone else needed it. =)
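A date dimension is just one row per calendar day with pre-computed attributes for reporting to join against. A minimal sketch of the idea (in Python here; the column set is a typical example rather than the post's exact C# schema):

```python
from datetime import date, timedelta

def date_dimension(start, end):
    """Yield one row per day from start to end inclusive, with the
    pre-computed attributes a reporting tool typically joins against."""
    d = start
    while d <= end:
        yield {
            "DateKey": int(d.strftime("%Y%m%d")),  # e.g. 20140630
            "Date": d.isoformat(),
            "Year": d.year,
            "Quarter": (d.month - 1) // 3 + 1,
            "Month": d.month,
            "MonthName": d.strftime("%B"),
            "Day": d.day,
            "DayOfWeek": d.strftime("%A"),
            "IsWeekend": d.weekday() >= 5,
        }
        d += timedelta(days=1)

rows = list(date_dimension(date(2014, 1, 1), date(2014, 12, 31)))
print(len(rows))  # 365
```

Dump the rows into a table once and every report gets consistent year/quarter/weekend logic for free.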

posted @ 6/30/2014 11:05 AM by Roy Ashbrook

List Fields/Formfields data in a Word Document using C# and Microsoft.Office.Interop.Word

I have a set of documents I need to review regularly. They are *mostly* form data in Word, so I wanted to write a simple script to extract the data I need from all of the documents and put it into a table. I wasn't sure how to do this, so I wrote a little script to iterate over the data in the document. The documents I review have several versions, different form fields, and other irregularities, so I wanted to save the little script I wrote, which uses various methods to iterate through the document. I use LINQPad regularly, so I wrote this in there, in C#, as a method for opening and handling the Word document and an extension for iterating it.




posted @ 6/25/2014 9:28 AM by Roy Ashbrook

Find Recent Items in Windows 8

  1. Hit Windows + R (Opens "Run" command)
  2. Type "Recent"
  3. Hit Enter!

Google search yields a ton of sources for this. I'm just going to reference the first one that I looked at:



posted @ 6/4/2014 3:05 PM by Roy Ashbrook

Amazon Workspaces – Usage Notes


Pros:

  • Reasonably priced
  • Easy to set up
  • Easy to add other users
  • Easy to reset to zero and retain data
  • Expensive plan has good response


Cons:

  • No usage-based pricing
  • Requires usage of the Amazon WorkSpaces client
  • Can't save password, which makes complex passwords a pain
  • Can't move to a different workspace easily
  • Cheap plan not really fast enough for me


I think the standard plan would be great for most people: reasonably priced whether you need Office or not. I think using MS Office 365 would be a better deal for most, but it's certainly nice to get turnkey Office and anti-virus with no work. I'm surprised MS doesn't offer something like this yet, really. I wish there were a save-password option for the client to make using complex passwords easier, but this could be solved by a password reset policy, I suppose.

My Profile:

My computer time is split between Communication and Content Creation about 70/30. Communication is via the usual Email, Chat, and Screen Sharing. Most content creation is in MS Office or some type of coding application. The most important thing for me when utilizing a VDI platform is that it is fairly responsive when I'm working in multiple applications and need to alt-tab back and forth. VDI doesn't exactly shine in this capacity when you are working remotely, but I feel like I can get a good sense of a worst case scenario for typical users when I am testing.


Good, but I am not going to replace my standard solution (local use of Parallels on my MBPro) with this, yet.

Evaluated on: pricing, setup, experience, recovery, and performance.

posted @ 6/4/2014 1:18 PM by Roy Ashbrook

AD User Account Creation–Script

I’m throwing this script out there for any developers or admins who are seeking a quick SharePoint-focused script for user account creation.

In my environment we’re working towards an automated, unattended install of SharePoint, including account creation. AutoSPInstaller is cool, but it seems too complicated for me. Put another way, if I’m going to spend time learning, I’m choosing to learn PowerShell and SharePoint in more detail.

With that said, here is my script. This script could easily import from a CSV (or other file), read a SharePoint list, or use any other means of input. For the purpose of an example, an array is used.


#Make use of an array just for example. This could easily be a CSV, but since this was for dev, an array was easier

#In case of CSV, column order would be "SamAccountName,FName,LName,Description,Password"

#$UserList = Import-CSV -Path <csv path here>

#Example rows - replace with your own values
$UserList = @(
    @("<SamAccountName>","<FName>","<LName>","<Description>","<Password>"),
    @("<SamAccountName>","<FName>","<LName>","<Description>","<Password>")
)

#Loop through each nested array
ForEach ($User in $UserList) {

    $SamAccountName = $User[0] #(Read-Host -Prompt "Please Enter SamAccountName")

    #Check to see if SamAccountName already exists; if it doesn't, create it
    If (!(Get-ADUser -Filter {SamAccountName -eq $SamAccountName})){

        $OUPath = "OU=<OUName>,OU=<OUName>,DC=<DomainRoot>,DC=<dot suffix>"
        $DomainSuffix = "@<domain.org>"
        $FName = $User[1]
        $LName = $User[2]
        $Description = $User[3]
        $PassWord = ConvertTo-SecureString ($User[4]) -AsPlainText -Force #(Read-Host -Prompt "Enter Account Password" -AsSecureString)

        New-ADUser -Name ($FName+" "+$LName) -SamAccountName $SamAccountName -GivenName $FName -Surname $LName -DisplayName ($FName+" "+$LName) -Path $OUPath -UserPrincipalName ($SamAccountName+$DomainSuffix) -Description $Description -AccountPassword $PassWord -PasswordNeverExpires:$true -Enabled:$true
    }
    Else {
        Write-Host "$SamAccountName already exists within AD. It will not be created"
    }
}




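The CSV input mentioned in the comments follows the same check-then-create loop: parse each row in the documented column order, skip names that already exist. A sketch of that logic in Python, with made-up rows (the set here merely stands in for the Get-ADUser existence check):

```python
import csv
import io

# Made-up CSV in the column order the script's comment documents
csv_text = """SamAccountName,FName,LName,Description,Password
jdoe,John,Doe,Dev Account,P@ss1
jsmith,Jane,Smith,Dev Account,P@ss2
jdoe,John,Doe,Duplicate row,P@ss3
"""

existing = set()          # stands in for the Get-ADUser existence check
created, skipped = [], []

for row in csv.DictReader(io.StringIO(csv_text)):
    sam = row["SamAccountName"]
    if sam in existing:
        skipped.append(sam)   # "already exists within AD. It will not be created"
        continue
    existing.add(sam)
    created.append((sam, row["FName"], row["LName"], row["Description"]))

print(created)
print(skipped)  # ['jdoe']
```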
posted @ 5/29/2014 4:18 PM by Greg Tate

Disable LoopBack Script - SharePoint

A friend of mine and I were discussing the topic of disabling loopback, and which is the better route to go when creating a new SharePoint 2013 farm.

My perspective: when possible, abide by Microsoft’s recommendations. In this case Microsoft seems to take the classic “it depends” stance on disabling ye old loopback.

Being me, I asked myself: why not have both? After all, I was sure PowerShell could help out here. To those who know me or have read this blog: I’m not a developer, and it probably shows in various scripts. That said, the following script does accomplish the desired result of one script letting you choose whether to take the “Developer” route and disable the loopback check, or take the “Admin” route and call out your exceptions. If you are a new SharePoint admin this might be useful; if you aren’t new to SharePoint, then I’m sure you’ve already clicked off of this post a few lines back…lol

#Disable loopback, or enter back connection host names

#Use this to avoid disabling loopback - http://support.microsoft.com/kb/896861

#Enter your FQDNs when prompted

$DisableLoopBack = $null

Do {$DisableLoopBack = (Read-Host "Would you like to disable loopback? (Yes / No)") }
Until ($DisableLoopBack -eq "Yes" -or $DisableLoopBack -eq "No")

If ($DisableLoopBack -eq "Yes"){

    Write-Warning "According to Microsoft you should NOT disable loopback; HOWEVER, it's a common development practice."

    New-ItemProperty HKLM:\System\CurrentControlSet\Control\Lsa -Name "DisableLoopbackCheck" -value "1" -PropertyType dword -Force | Out-Null

    IISReset /noforce
}
Elseif ($DisableLoopBack -eq "No"){

    $HostNames = @()
    $More = $null

    Do {
        Write-Host "Here are the host names that will be added to the BackConnectionHostNames exception list"
        Write-Host $HostNames -ForegroundColor Green

        $More = (Read-Host "Would you like to add others? (Yes / No)")

        If ($More -eq "Yes"){
            $AddHost = (Read-Host "Enter FQDN Host Name")
            $HostNames += $AddHost
            Write-Host "$AddHost has been added to the list of names above" -ForegroundColor Green
        }
    }
    Until ($More -eq "No")

    New-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0 -Name "BackConnectionHostNames" -Value $HostNames -PropertyType MultiString

    IISReset /noforce
}


posted @ 5/29/2014 4:05 PM by Greg Tate

Uploading Files to SharePoint using PowerShell

Today I had to fall in line and do something I’m not entirely proud of: I had to create a script to replicate files from a file share to SharePoint. Struggling to find value in the effort, I figured a blog post to remind me of this event was fitting. The script was rather basic (a good thing), and I took the opportunity to grow, exploring the world of PowerShell and SharePoint interaction outside of the SharePoint PS snap-in.

Surprisingly, it was a bit easier than I had imagined. While I wish I could take 100% of the credit, inspiration for this function comes from this article

Function Upload-SPFile {

    Param (
        #Local or network path to the source file
        [string]$UncPath,

        #SharePoint URL including folder
        [string]$SPURL
    )

    $UploadPath = $SPURL + $(Split-Path -Leaf $UncPath)

    $WebClient = New-Object System.Net.WebClient
    $WebClient.Credentials = [System.Net.CredentialCache]::DefaultCredentials
    $WebClient.UploadFile($UploadPath, "PUT", $UncPath)
}

So that’s the basics of uploading a file to SharePoint using the .NET WebClient. To many this is old hat; to me, something new to start out the new year.

posted @ 1/2/2014 3:44 PM by Greg Tate
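For comparison, the same put-the-bytes-at-a-URL shape in Python's standard library: the target URL is the folder URL plus the file name, and the body is the file's bytes sent with PUT. The URL below is made up, and the request is only built here, not sent:

```python
from pathlib import Path
from urllib.request import Request

def build_upload_request(unc_path, sp_url):
    """Mirror the function above: target URL = folder URL + source file name,
    body = the file bytes, verb = PUT."""
    src = Path(unc_path)
    return Request(sp_url + src.name, data=src.read_bytes(), method="PUT")

# Made-up paths for illustration; urlopen(req) would perform the upload.
Path("report.txt").write_text("hello")
req = build_upload_request("report.txt", "http://sharepoint/site/library/")

print(req.get_method(), req.full_url)
```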

Using C# to interface with SQLite

If you need to interface with SQLite from .NET there are a couple of steps.

1) Get the .NET data provider for SQLite (System.Data.SQLite) from sourceforge.net.
2) Add a reference to System.Data.SQLite to your project.
3) Make sure the reference is marked to be copied locally.

Here is a C# helper class for doing select, insert, update, and delete.

 public class SqlLiteHelper
        String dbConnection;

        /// <summary>
        ///     Single Param Constructor for specifying the DB file.
        /// </summary>
        /// <param name="inputFile">The File containing the DB</param>
        public SqlLiteHelper(String inputFile)
            dbConnection = String.Format("Data Source={0}", inputFile);

        /// <summary>
        ///     Single Param Constructor for specifying advanced connection options.
        /// </summary>
        /// <param name="connectionOpts">A dictionary containing all desired options and their values</param>
        public SqlLiteHelper(Dictionary<String, String> connectionOpts)
            String str = "";
            foreach (KeyValuePair<String, String> row in connectionOpts)
                str += String.Format("{0}={1}; ", row.Key, row.Value);
            str = str.Trim().Substring(0, str.Length - 1);
            dbConnection = str;

        /// <summary>
        ///     Allows the programmer to run a query against the Database.
        /// </summary>
        /// <param name="sql">The SQL to run</param>
        /// <returns>A DataTable containing the result set.</returns>
        public DataTable GetDataTable(string sql)
            DataTable dt = new DataTable();
                using (SQLiteConnection cnn = new SQLiteConnection(dbConnection))
                    using (SQLiteCommand mycommand = new SQLiteCommand(cnn))
                        mycommand.CommandText = sql;
                        using (SQLiteDataReader reader = mycommand.ExecuteReader())
            catch (Exception e)
                throw new Exception(e.Message);
            return dt;

        /// <summary>
        ///     Allows the programmer to interact with the database for purposes other than a query.
        /// </summary>
        /// <param name="sql">The SQL to be run.</param>
        /// <returns>An Integer containing the number of rows updated.</returns>
        public int ExecuteNonQuery(string sql)
            int rowsUpdated = 0;

            using (SQLiteConnection cnn = new SQLiteConnection(dbConnection))
                using (SQLiteCommand mycommand = new SQLiteCommand(cnn))
                    mycommand.CommandText = sql;
                    rowsUpdated = mycommand.ExecuteNonQuery();
            return rowsUpdated;

        /// <summary>
        ///     Allows the programmer to retrieve single items from the DB.
        /// </summary>
        /// <param name="sql">The query to run.</param>
        /// <returns>A string.</returns>
        public string ExecuteScalar(string sql)
            using (SQLiteConnection cnn = new SQLiteConnection(dbConnection))
                using (SQLiteCommand mycommand = new SQLiteCommand(cnn))
                    mycommand.CommandText = sql;
                    object value = mycommand.ExecuteScalar();
            if (value != null)
                return value.ToString();
            return "";
        }

        /// <summary>
        ///     Allows the programmer to easily update rows in the DB.
        /// </summary>
        /// <param name="tableName">The table to update.</param>
        /// <param name="data">A dictionary containing Column names and their new values.</param>
        /// <param name="where">The where clause for the update statement.</param>
        /// <returns>A boolean true or false to signify success or failure.</returns>
        public bool Update(String tableName, Dictionary<String, String> data, String where)
        {
            String vals = "";
            Boolean returnCode = true;
            if (data.Count >= 1)
            {
                foreach (KeyValuePair<String, String> val in data)
                {
                    vals += String.Format(" {0} = '{1}',", val.Key.ToString(), val.Value.ToString());
                }
                vals = vals.Substring(0, vals.Length - 1);
            }
            try
            {
                this.ExecuteNonQuery(String.Format("update {0} set {1} where {2};", tableName, vals, where));
            }
            catch
            {
                returnCode = false;
            }
            return returnCode;
        }

        /// <summary>
        ///     Allows the programmer to easily delete rows from the DB.
        /// </summary>
        /// <param name="tableName">The table from which to delete.</param>
        /// <param name="where">The where clause for the delete.</param>
        /// <returns>A boolean true or false to signify success or failure.</returns>
        public bool Delete(String tableName, String where)
        {
            Boolean returnCode = true;
            try
            {
                this.ExecuteNonQuery(String.Format("delete from {0} where {1};", tableName, where));
            }
            catch (Exception ex)
            {
                returnCode = false;
            }
            return returnCode;
        }

        /// <summary>
        ///     Allows the programmer to easily insert into the DB
        /// </summary>
        /// <param name="tableName">The table into which we insert the data.</param>
        /// <param name="data">A dictionary containing the column names and data for the insert.</param>
        /// <returns>A boolean true or false to signify success or failure.</returns>
        public bool Insert(String tableName, Dictionary<String, String> data)
        {
            String columns = "";
            String values = "";
            Boolean returnCode = true;
            foreach (KeyValuePair<String, String> val in data)
            {
                columns += String.Format(" {0},", val.Key.ToString());
                values += String.Format(" '{0}',", val.Value);
            }
            columns = columns.Substring(0, columns.Length - 1);
            values = values.Substring(0, values.Length - 1);
            try
            {
                this.ExecuteNonQuery(String.Format("insert into {0}({1}) values({2});", tableName, columns, values));
            }
            catch (Exception ex)
            {
                returnCode = false;
            }
            return returnCode;
        }

        /// <summary>
        ///     Allows the programmer to easily delete all data from the DB.
        /// </summary>
        /// <returns>A boolean true or false to signify success or failure.</returns>
        public bool ClearDB()
        {
            DataTable tables;
            try
            {
                tables = this.GetDataTable("select NAME from SQLITE_MASTER where type='table' order by NAME;");
                foreach (DataRow table in tables.Rows)
                {
                    this.ClearTable(table["NAME"].ToString());
                }
                return true;
            }
            catch
            {
                return false;
            }
        }

        /// <summary>
        ///     Allows the user to easily clear all data from a specific table.
        /// </summary>
        /// <param name="table">The name of the table to clear.</param>
        /// <returns>A boolean true or false to signify success or failure.</returns>
        public bool ClearTable(String table)
        {
            try
            {
                this.ExecuteNonQuery(String.Format("delete from {0};", table));
                return true;
            }
            catch
            {
                return false;
            }
        }
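One caveat with the helpers above: they build SQL by concatenating values into the statement, which breaks on values containing quotes and invites SQL injection. As a minimal sketch of the same dictionary-driven pattern done with parameterized queries, here it is in Python's stdlib sqlite3 module (the `people` table and the function names are made-up examples, not part of the class above):

```python
import sqlite3

def insert(conn, table, data):
    """Insert a dict of column -> value using ? placeholders for the values.

    Note: table/column names cannot be parameterized, so those must still
    come from trusted input.
    """
    cols = ", ".join(data.keys())
    marks = ", ".join("?" for _ in data)
    conn.execute(f"insert into {table}({cols}) values({marks})",
                 tuple(data.values()))

def update(conn, table, data, where, where_args=()):
    """Update columns from a dict; the where clause uses its own placeholders."""
    sets = ", ".join(f"{col} = ?" for col in data)
    conn.execute(f"update {table} set {sets} where {where}",
                 tuple(data.values()) + tuple(where_args))

# Example against a throwaway in-memory database:
conn = sqlite3.connect(":memory:")
conn.execute("create table people (id integer primary key, name text)")
insert(conn, "people", {"id": 1, "name": "O'Brien"})  # embedded quote is handled safely
update(conn, "people", {"name": "Smith"}, "id = ?", (1,))
print(conn.execute("select name from people").fetchone()[0])  # prints Smith
```

The driver handles quoting, so a value like `O'Brien` no longer produces a malformed statement the way string concatenation would.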

posted @ 11/15/2013 10:44 AM by Chris Barba

Using PowerShell to Check AD Schema

Here we are, a cold, crisp 20-degree Wednesday in November. I thought to myself…this is not cool (no pun intended), but you know what is cool? Yeah, I’m sure you guessed it: PowerShell’s ActiveDirectory module.

Just a quick blog note to show how PowerShell quickly settled a dispute during an upgrade of our AD schema to support a Windows 2012 domain controller. Of course this wasn’t a big dispute; plenty of other tools could have answered it. The question was whether the schema had already been extended to support a 2012 server. What made this so cool was being able to share the experience with others who didn’t know PowerShell could replace some of the old standby AD tools. So this is more of an AH-HA moment that felt right to share (along with the script)…all brought to us by PowerShell and the ActiveDirectory module.

(An academic honesty note here…this script is not 100% my own work, more like 5%–10%. I can’t remember where I snagged the meat of it, so the credit remains unknown.)

#This script will query AD for the schema version of AD, Exchange, and Lync. Can be run as a least-privileged user.

Import-Module ActiveDirectory

$SchemaVersions = @()

#AD Portion
$SchemaHashAD = @{
    13 = "Windows 2000 Server";
    30 = "Windows Server 2003";
    31 = "Windows Server 2003 R2";
    44 = "Windows Server 2008";
    47 = "Windows Server 2008 R2";
    56 = "Windows Server 2012"
}

$SchemaPartition = (Get-ADRootDSE).NamingContexts | Where-Object {$_ -like "*Schema*"}
$SchemaVersionAD = (Get-ADObject $SchemaPartition -Property *).objectVersion

$AdSchema = New-Object System.Object
$AdSchema | Add-Member -Type NoteProperty -Name Schema -Value $SchemaVersionAD
$AdSchema | Add-Member -Type NoteProperty -Name Product -Value "AD"
$AdSchema | Add-Member -Type NoteProperty -Name Version -Value $SchemaHashAD.Item($SchemaVersionAD)
$SchemaVersions += $AdSchema

#Exchange Portion
$SchemaHashExchange = @{
    4397 = "Exchange Server 2000 RTM";
    4406 = "Exchange Server 2000 SP3";
    6870 = "Exchange Server 2003 RTM";
    6936 = "Exchange Server 2003 SP3";
    10628 = "Exchange Server 2007 RTM";
    10637 = "Exchange Server 2007 RTM";
    11116 = "Exchange 2007 SP1";
    14622 = "Exchange 2007 SP2 or Exchange 2010 RTM";
    14726 = "Exchange 2010 SP1";
    14732 = "Exchange 2010 SP2";
    15137 = "Exchange 2013"
}

$SchemaPathExchange = "CN=ms-Exch-Schema-Version-Pt,$SchemaPartition"
If (Test-Path "AD:$SchemaPathExchange") {
    $SchemaVersionExchange = (Get-ADObject $SchemaPathExchange -Property rangeUpper).rangeUpper
}
Else {
    $SchemaVersionExchange = 0   #default so the -ne 0 check below skips this product
    $ExchangeErr = 1
}

$ExchSchema = New-Object System.Object
$ExchSchema | Add-Member -Type NoteProperty -Name Schema -Value $SchemaVersionExchange
$ExchSchema | Add-Member -Type NoteProperty -Name Product -Value "Exchange"
$ExchSchema | Add-Member -Type NoteProperty -Name Version -Value $SchemaHashExchange.Item($SchemaVersionExchange)
If ($ExchSchema.Schema -ne 0) {
    $SchemaVersions += $ExchSchema
}

#Lync Portion
$SchemaHashLync = @{
    1006 = "LCS 2005";
    1007 = "OCS 2007 R1";
    1008 = "OCS 2007 R2";
    1100 = "Lync Server 2010";
    1150 = "Lync Server 2013"
}

$SchemaPathLync = "CN=ms-RTC-SIP-SchemaVersion,$SchemaPartition"
If (Test-Path "AD:$SchemaPathLync") {
    $SchemaVersionLync = (Get-ADObject $SchemaPathLync -Property rangeUpper).rangeUpper
}
Else {
    $SchemaVersionLync = 0   #default so the -ne 0 check below skips this product
    $LyncErr = 1
}

$LyncSchema = New-Object System.Object
$LyncSchema | Add-Member -Type NoteProperty -Name Schema -Value $SchemaVersionLync
$LyncSchema | Add-Member -Type NoteProperty -Name Product -Value "Lync"
$LyncSchema | Add-Member -Type NoteProperty -Name Version -Value $SchemaHashLync.Item($SchemaVersionLync)
If ($LyncSchema.Schema -ne 0) {
    $SchemaVersions += $LyncSchema
}

#Output Section
Write-Host "Known current schema version of products:"
$SchemaVersions | Format-Table * -AutoSize

#I think this error handling is probably better off in the setting of the note property but this takes care of it for now
If ($LyncErr -eq 1) {
    Write-Host "Lync or OCS not present" -ForegroundColor Yellow
}
If ($ExchangeErr -eq 1) {
    Write-Host "Exchange not present" -ForegroundColor Yellow
}



So there you have it, another way PowerShell rocks.

posted @ 11/13/2013 10:22 AM by Greg Tate

Hiding Disabled Users From Exchange Address Book

The other day, while reviewing an Exchange 2010 environment, I noticed a few active mailboxes belonging to disabled users. For obvious reasons this isn’t a good thing, if for no other reason than it clutters up the Exchange Address Book.

Next thought in my mind…what’s the best way to hide these disabled users? Given the PowerShell bias that I do in fact have, I spent 15 minutes reviewing the options.

  1. Use a manual process. This would include disabling the user in AD, followed up with the steps described here.
  2. Use Exchange Address Book Policies (ABPs). As indicated in this article, ABPs have a dependency on Exchange 2010 SP2. That said, it seems like a viable and interesting approach.
  3. Use PowerShell. As I stated from the outset, I’m biased right now…so a PowerShell-only approach seems “more better”.

Here is the script I used in a resource/user environment. Keep in mind this is a down-and-dirty version, a proof of concept. Treat this example as inspiration only (good or bad).

#This script will query for all LinkedMailboxes when run on an Exchange Server.
#It will return the set of users whose Linked Master Accounts are disabled.
#Use the results with "Set-Mailbox -HiddenFromAddressListsEnabled $true" to hide
#all of the disabled users from the address book. Example below.

Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010 -ErrorAction Continue
Import-Module ActiveDirectory

$linkmbx = Get-Mailbox -RecipientTypeDetails LinkedMailbox
$alcusers = Get-ADUser -Filter * -Server <your domain here> -Properties Enabled
$userrpt = @()

foreach ($mbx in $linkmbx) {
    $name = $mbx.LinkedMasterAccount
    $user = $name.Split("\")
    $alcuser = $alcusers | Where {$_.SamAccountName -eq $user[1]}
    if ($alcuser.Enabled -eq $false) {
        $rpt = New-Object System.Object
        $rpt | Add-Member -MemberType NoteProperty -Name Name -Value $alcuser.Name
        $rpt | Add-Member -MemberType NoteProperty -Name Alias -Value $mbx.Alias
        $rpt | Add-Member -MemberType NoteProperty -Name HidFromAddBook -Value $mbx.HiddenFromAddressListsEnabled
        $userrpt += $rpt
    }
}

Write-Host "There are" $userrpt.Count "linked mailboxes with disabled user accounts in the user domain"

#Uncomment this section if you want to include changing the address book visibility
#Foreach ($user in $userrpt) {
#    Write-Host "Changing address book visibility for" $user.Alias
#    Set-Mailbox -Identity $user.Alias -HiddenFromAddressListsEnabled $true
#}


Of course the next thought of automation comes to mind…but that’s a different post. 

posted @ 11/12/2013 3:15 PM by Greg Tate

How to get a list of all identity columns in a database

Here is some code to get a list of identity columns in a database using the legacy sysobjects/syscolumns catalog views. (On SQL Server 2005 and later, the sys.identity_columns catalog view provides the same information directly.)

select  so.name as TableName, o.list as IdentityColumnName
from    sysobjects so
cross apply
    (
        select case when exists (
                select id from syscolumns
                where object_name(id) = so.name
                and name = column_name
                and columnproperty(id, name, 'IsIdentity') = 1
                ) then column_name
            end + ' '
        from information_schema.columns
        where table_name = so.name
    ) o (list)
where   so.xtype = 'U'
AND o.list is not null
AND so.name NOT IN ('dtproperties')

posted @ 11/11/2013 4:14 PM by Chris Barba

Check if a database exists on a server

Here is some code to use to check if a database exists.
Just replace the string 'DATABASE NAME' with the name you are checking for. (DB_ID('DATABASE NAME') is a handy alternative; it returns NULL when the database does not exist.)

SELECT * FROM [master].[sys].[databases] WHERE name='DATABASE NAME'

posted @ 11/11/2013 4:11 PM by Chris Barba

How to switch Entity Framework database connected to

If you have a connection to a database through Entity Framework and you need to switch it to another database (with the exact same structure), you just need to set the Connection.ConnectionString (as seen below).
I had an application where we created a copy of the Master database when setting up a new client, so using Entity Framework I switched from the Master database to the client's copy, depending on what the admin was doing.

using (MasterEntities aEntities = new MasterEntities())
{
    //Switch the db connected to
    aEntities.Database.Connection.ConnectionString =
        aEntities.Database.Connection.ConnectionString.Replace("OldDatabaseName", "NewDatabaseName");

    //Some Query
}
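One caution about the blind Replace: if the old database name happens to appear anywhere else in the connection string (say, in a user id), it gets rewritten too. A safer approach is to rewrite only the Database/Initial Catalog key. Here is a language-agnostic sketch of that idea in Python (this is not the EF API; the `switch_database` function, the server name, and the database names are made-up examples):

```python
def switch_database(conn_str: str, new_db: str) -> str:
    """Replace only the value of the Database/Initial Catalog key."""
    parts = []
    for pair in conn_str.split(";"):
        if not pair.strip():
            continue
        key, _, value = pair.partition("=")
        if key.strip().lower() in ("database", "initial catalog"):
            value = new_db
        parts.append(f"{key.strip()}={value.strip()}")
    return ";".join(parts)

original = "Server=sql01;Database=Master;User Id=MasterUser;Password=secret"
print(switch_database(original, "Client42"))
# -> Server=sql01;Database=Client42;User Id=MasterUser;Password=secret
```

A naive `.Replace("Master", "Client42")` on the same string would also have clobbered `MasterUser`, which is exactly the failure mode the key-aware rewrite avoids.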


posted @ 11/11/2013 4:01 PM by Chris Barba

Create SharePoint 2013 Result Source with PowerShell

In my continued automation efforts, I was looking to convert documentation provided by a consultant into something more…well…automated. In this first of four parts I’ll walk through creating a Result Source with PowerShell.

Subsequent posts (parts 2–4) will give examples of creating Result Types, Query Rules, and Search Navigation. The aim of this effort is to use PowerShell to rebuild a search service application, essentially cloning it without its data. This is useful when Microsoft support gives the classic solution of “rebuild your service application”. Doh!

It should be noted:

  • This will create the Result Source at the Site Collection level.
  • This isn’t 100% my original work; it’s inspired by (taken mostly from and modified) the SearchGuys blog post.
    • That blog used Bing and Federation as the example; this example is a local SharePoint Result Source that queries BCS.


Add-PSSnapin Microsoft.SharePoint.PowerShell

#Change These Variables To Fit
$SPWeb = "Your SP Site Collection Here"
$resultSourceName = "Your Content Source Friendly Name Here"
$resultSourceDescription = "Description for (BCS) Data Source"
$qT = '{searchTerms?} (ContentSource="<Content Source Friendly Name Here>" IsContainer=false)'

#Begin The Process
$ssa = Get-SPEnterpriseSearchServiceApplication
$fedman = New-Object Microsoft.Office.Server.Search.Administration.Query.FederationManager($ssa)
$searchOwner = Get-SPEnterpriseSearchOwner -SPWeb $SPWeb -Level SPSite
$resultSource = $fedman.GetSourceByName($resultSourceName, $searchOwner)

#Check To See if it exists
If ($resultSource -eq $null) {
    Write-Host "Result source does not exist. Creating."
    $resultSource = $fedman.CreateSource($searchOwner)
}
else { Write-Host "Using existing result source." }

#Finish It Up
$resultSource.Name = $resultSourceName
$resultSource.ProviderId = $fedman.ListProviders()['Local SharePoint Provider'].Id
$resultSource.Description = $resultSourceDescription

#Apply the query transform and save the source
$resultSource.CreateQueryTransform($qT)
$resultSource.Commit()



posted @ 11/7/2013 7:55 PM by Greg Tate

Working with SharePoint Web Parts using PowerShell

First things first. I’m not a developer. I seem to do ok working my way through the SharePoint object model with PowerShell and writing automation scripts, but that doesn’t make me a developer.

With that out of the way, I do find myself in an odd place when developers neglect (for whatever reason) to automate populating the web parts in the content pages their solution has deployed. An example might be a Content Search web part needing to have the proper display template selected for displaying conversations. Ah…my dev friends (don’t hate me), why not go that extra mile? If I can do it through script, surely you can employ your superior coding skills to include it in the solution (wsp)!

For those of you who may find yourself in my shoes, here is a script I created to help ease that cross-farm (environment) pain. Essentially the script is a function with a few parameters. This is the core; from here you can customize it to your needs. It’s a great starting point for anyone who wants to automate changes to web parts via PowerShell. You can copy and paste the script below into PowerShell ISE and save it under whatever name you like.


Add-PSSnapin Microsoft.SharePoint.Powershell

#----Start of Function-------
Function Set-WebParts {
    Param(
        [String]$SiteUrl = $(Read-Host "Please Enter Site URL"),
        [String]$PageUrl = $(Read-Host "Please Enter Page URL")
    )

    $web = Get-SPWeb $SiteUrl

    #+Get and Checkout Page
    #+-Get Page
    $page = $web.GetFile($PageUrl)

    #+-CheckOut The Page
    $page.CheckOut()

    #+-Load Limited Web Part Manager
    $wpm = $web.GetLimitedWebPartManager($PageUrl, [System.Web.UI.WebControls.WebParts.PersonalizationScope]::Shared)

    #+Change Conversations WebPart - In my example some web parts were titled "Conversations" and some "Content Search"
    $wp = $wpm.WebParts | Where {$_.Title -eq "Conversations" -or $_.Title -eq "Content Search"}

    #+-This is the base url for the template - Change this to whatever meets the need
    $base = "~sitecollection/_catalogs/masterpage/Display Templates/Content Web Parts/"

    #+-This is the template name
    $template = "Item_Discussion.js"
    $NewItemTemplateId = $base + $template

    #+-Actually Setting The Part
    $wp.ItemTemplateId = $NewItemTemplateId
    $wpm.SaveChanges($wp)

    #+CheckIn and Publish Page
    $page.CheckIn("Scripted Change")
    $page.Publish("Scripted Publish")
}
#----End Of Function----


As I said, I’m not a developer, so while this is functional, there is probably a better way to accomplish what I’ve published here.

posted @ 11/7/2013 5:31 PM by Greg Tate