— Administrative Steel —
The argument for PowerShell

There are too many people in this world who are unwilling to go the extra mile. Don’t be one of them. Microsoft has built time-proven GUI-based tools to work with Active Directory, task scheduling, SQL and other technologies, but PowerShell opens up these technologies and permits a great deal of flexibility while managing Microsoft applications.

Let’s take a look at an introductory statement from a Microsoft TechNet article titled “Scripting With Windows PowerShell” (found here):

“Windows PowerShell is a task-based command-line shell and scripting language designed especially for system administration. Built on the .NET Framework, Windows PowerShell helps IT professionals and power users control and automate the administration of the Windows operating system and applications that run on Windows.”

Time to break this down:

“Windows PowerShell is a task-based command-line shell…”

Yes, PowerShell is task-based. While that terminology may sound vague, let’s just say PowerShell was built to get work done. The evidence is in cmdlets (pronounced: command-lets). Cmdlets are the programs you run as commands at the console or in a PowerShell script, and they always take the form “Do THIS to THAT”. Simple, readable and efficient, these commands read syntactically as <Verb>-<Noun>. Here are some examples:

Clear-History
Export-CSV
Move-Item

There are many cmdlets installed with every Windows instance, many others built by Microsoft and third parties that can be installed manually, and you can even create and install your own. Cmdlets can also have aliases assigned to them. For example, ls is an alias for Get-ChildItem, which gives a directory listing. In short, PowerShell is built for productivity and has a relatively low learning curve to get you started.
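As a quick illustration, you can explore aliases right from the console (a small sketch; the exact alias list varies slightly between PowerShell versions):

Get-Alias ls                          #Shows that ls resolves to Get-ChildItem
Get-Alias -Definition Get-ChildItem   #Lists every alias for the cmdlet (dir, gci, ls)
ls C:\Windows                         #Identical to running Get-ChildItem C:\Windows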
Got it? Let’s move on.

“Built on the .NET Framework…”

The power behind PowerShell, its ability to be applied to a wide range of scripting scenarios, lies in .NET. This is huge.

As a brief explanation, .NET is Microsoft’s development framework, composed of a large library (the Framework Class Library, or FCL) and the Common Language Runtime (CLR). The CLR gives multiple languages access to the FCL, so there is no need to rewrite class libraries for different .NET languages. When PowerShell was introduced, it was built on the foundation of .NET just as Visual C# and Visual Basic .NET were before it. This means it had access to Microsoft’s core development library from the very beginning! This grants PowerShell unmatched flexibility as a management tool.
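As a small taste of that access, you can call Framework classes directly from the console (a minimal sketch; any .NET class could stand in here):

[System.Math]::Round(3.14159, 2)                      #Calls a static .NET method: returns 3.14
[System.IO.Path]::GetExtension("C:\Temp\report.csv")  #Returns .csv
[System.Guid]::NewGuid()                              #Generates a new GUID using the FCL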

If you are truly a programming novice, you may not be following. If that is the case, I encourage you to pick up the basics; try Microsoft’s C# Fundamentals for Absolute Beginners. In order to truly appreciate PowerShell’s place as a .NET technology, you will need to understand a bit about .NET and object-oriented programming. You don’t need to become a developer, but I have seen C# developers mold and bend PowerShell in ways that have really impressed me. I want to set you up to learn how your tools accomplish your work. Doing so will up your administrative game and give you the professional brawn you’ve been looking for. I have always believed that if you want to become a great SysAdmin or Engineer, you must understand basic development and programming concepts.

“Windows PowerShell helps IT professionals and power users control and automate…”

Why learn PowerShell? I can boil it down to one word: automation. PowerShell will empower you by granting the ability to perform tasks quickly and efficiently. It only takes some additional effort up front to create tools that will save time, money and your sanity. Creating your own tools to solve the problems you face on a daily basis will boost your confidence and build your portfolio. You can also share, perfect and use these tools in the future. PowerShell may be relatively new, but it was desperately needed, and it is very powerful. Learn it, love it, and become more than the average Windows admin. Yes, you’re working yourself out of a job …and into a better one. In the end your manager –and your career– will thank you.

If you’re on the fence about PowerShell, get off and become a power user. Dig in and have fun!
If you’ve built scripts in Linux environments, check out PowerShell Objects Part 1: No More Parsing! to see how PowerShell differs from the languages you may be accustomed to.

PowerShell Objects Part 2: Do It Yourself

In part one of our discussion of PowerShell objects, we covered the integration of .NET into PowerShell, how that gives PowerShell its object-oriented paradigm, and how to work with these objects. Now, as an extension of that, we will create objects ourselves when scripting. This is when it gets fun.

Let’s suppose I have an XML file that contains weightlifting data on a number of collegiate athletes. As a simple example we’ll use the following format (there may be any number of <Athlete> elements):

<Athlete1RMData>
    <Athlete LastName="">
            <BenchPress></BenchPress>
            <MilitaryPress></MilitaryPress>
            <Squat></Squat>
    </Athlete>
    <Athlete LastName="">
            <BenchPress></BenchPress>
            <MilitaryPress></MilitaryPress>
            <Squat></Squat>
    </Athlete>
</Athlete1RMData>

The first thing to be aware of is that PowerShell is perfectly capable of importing this XML file and will parse it into an object for you to manipulate. This can also be done with CSV files.

$AthleteData = [xml](Get-Content C:\Path\to\XML\file.xml)

$AthleteData is now a .NET object (System.Xml.XmlDocument) with properties and sub-properties. For example, I can return the 1-Rep Max values for Bench Press for all athletes by:

$AthleteData.Athlete1RMData.Athlete.BenchPress
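The same idea applies to delimited data. Import-Csv turns each row into an object whose properties are the column headers (a quick sketch, assuming a hypothetical Athletes.csv with LastName and BenchPress columns):

$CsvData = Import-Csv C:\Path\to\CSV\Athletes.csv
$CsvData.BenchPress    #Returns the BenchPress column for every row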

Having all of this data in a single object isn’t very convenient for future processing, so let’s create some objects that will be easier to work with. I will create an ArrayList of athletes, each of which will have a last name and all three 1RM stats associated with him. We do this by creating an ArrayList object, iterating through the XML object, and creating an object for each individual athlete. Then we add the stats as properties:

$Roster = New-Object System.Collections.Arraylist
foreach ($Athlete in $AthleteData.Athlete1RMData.Athlete)
{
  #Creates a new object that represents an athlete
  $AthObject = New-Object System.Object    

  #Adds properties to the object
  $AthObject | Add-Member -type NoteProperty -name LastName -value $Athlete.LastName
  $AthObject | Add-Member -type NoteProperty -name 1RMBench -value $Athlete.BenchPress
  $AthObject | Add-Member -type NoteProperty -name 1RMMilitary -value $Athlete.MilitaryPress
  $AthObject | Add-Member -type NoteProperty -name 1RMSquat -value $Athlete.Squat

  #Adds the object to our list ([void] discards the index value that Add() returns)
  [void]$Roster.Add($AthObject)
}

Look at what we’ve done. Once we create a list, we iterate through all of the athletes represented in the XML file using a foreach loop. Then, for each athlete, we create an object representing that individual. Finally, we add the values extracted from the XML as properties on our new athlete object. Each property has a name we can use to reference its corresponding value later on. Visually, this is what we’ve created:
[Diagram: the $Roster ArrayList holding one object per athlete, each with LastName, 1RMBench, 1RMMilitary and 1RMSquat properties]
Now that we have our athletes in the form of .NET objects, we can work with them more easily. We can add additional athlete information, make comparisons, find averages, format and export data, and perform a whole host of other useful operations.
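For instance (a short sketch; the values read from XML are strings, so they are cast to [int] before numeric comparison, and property names that start with a digit have to be quoted):

#Export the roster to CSV for later use
$Roster | Export-Csv C:\Path\to\Output\Roster.csv -NoTypeInformation

#Find the athlete with the strongest bench press
$Roster | Sort-Object { [int]$_.'1RMBench' } -Descending | Select-Object -First 1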

PowerShell Objects Part 1: No More Parsing!

One of the first things to know about PowerShell in order to wield it effectively is its object-based paradigm. If you are trying to parse strings in PowerShell scripts, it’s likely that there is a better way. (Disclaimer: There are indeed some scenarios where string parsing is necessary, but PowerShell has the muscle to handle that too.)

You see, PowerShell wants you to build, destroy, display, manipulate and break down objects, and it will do everything in its power to help you get your work done if you comply. If, however, you insist on consistently utilizing the parsing techniques often applied in Unix environments, you will become very frustrated very quickly. Let’s look at a simple example:

Windows Management Instrumentation (WMI) objects are full of valuable system information that is easily accessible using PowerShell. Let’s say I want to display processor information. I can use:

Get-WmiObject Win32_Processor

I get the following output:

Caption           : Intel64 Family 6 Model 69 Stepping 1
DeviceID          : CPU0
Manufacturer      : GenuineIntel
MaxClockSpeed     : 2401
Name              : Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz
SocketDesignation : SOCKET 0

This is not a string, a table or an array. The output presented here consists of properties of the Win32_Processor WMI object. Since PowerShell is a .NET technology, the scripting language takes advantage of the .NET Framework classes available to it. This integration is part of what makes PowerShell such a powerful tool. Because .NET already provides a wide array of methods specific to various object types, it is to the engineer’s advantage to use that functionality in scripting. That said, be familiar with and understand the capabilities of these objects; it’s a great step toward mastering Windows management. Now, off my soap box…
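For instance, because each property value is itself a .NET object, its type-specific members are right at your fingertips (a quick sketch, assuming a single-processor machine; the output will vary):

$Proc = Get-WmiObject Win32_Processor
$Proc.Name.ToUpper()          #The Name property is a .NET string, so String methods just work
$Proc.MaxClockSpeed / 1000    #Numeric properties support arithmetic directly (clock speed in GHz)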

What we are looking at in the example above is a .NET object with various members. This is not all of the information contained in the Win32_Processor WMI object either. It’s only a subset that PowerShell thinks you may find useful. If you want a list of all the members that belong to this particular object, pipe the output of our last command into the Get-Member cmdlet:

Get-WmiObject win32_processor | Get-Member

As a result we get a long list of members that belong to the Win32_Processor object. Here is a small sample from my machine:

[Screenshot: partial Get-Member output showing the Name, MemberType and Definition columns]

We see that each member has a Name, MemberType and Definition. Most of the members in this object are of the “Property” type. These properties contain data about the object that we can access. We can also get a glimpse of the methods (specific to that object type) that are available for use on the object.

There are also other “MemberTypes” that we won’t discuss. We also won’t worry about the “Definitions”, as they are mostly useful in lower-level .NET programming contexts.

In some cases it may be important to understand what kind of .NET object we’re working with. Fortunately, there is a method called GetType(), inherited from .NET’s System.Object class, that applies to all objects. We can call GetType() on an object to view the class it is derived from.

$CPUInfo = Get-WmiObject Win32_Processor
$CPUInfo.GetType()

Notice that we get an object back!

IsPublic IsSerial Name               BaseType
-------- -------- ----               --------
True     True     ManagementObject   System.Management.ManagementBaseObject


Now that we know how to figure out the object type we are working with and the properties and methods associated with it, let’s build a custom object from Win32_Processor that only contains the data I need. To do this, I pipe the object into Select (an alias for Select-Object):

Get-WmiObject win32_processor | Select Name, Manufacturer

Which will return only information about the Name and Manufacturer of my CPU:

Name                                              Manufacturer
----                                              ------------
Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz          GenuineIntel

This is still an object, but with only two members. Now suppose I want nothing more than the name of the processor, which I can use as a string. I would then use the -ExpandProperty parameter to isolate the property’s value:

Get-WmiObject win32_processor | Select -ExpandProperty Name

Which returns the string:

Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz

Finally, I can save an object to a variable that I can reference to obtain its properties at a later time using <object>.<member> notation. For example:

$CPUInfo = Get-WmiObject Win32_Processor | Select Name, Manufacturer

Then, if I only want to get the value of a particular property I can do so by:

$CPUInfo.Name

Which of course returns, as a string:

Intel(R) Core(TM) i7-4500U CPU @ 1.80GHz

Now you understand the basics of the PowerShell object paradigm. Check out Part 2: Do it Yourself.

InlineScripts and Variables in PowerShell Workflows

Quick Tutorial:

1) Note that not all PowerShell operations translate into Workflow activities
2) Place all PowerShell-only cmdlets into an InlineScript block
3) Pull Workflow variables into the InlineScript block by prefixing the variable with $Using:
4) Save the output of the inline script by assigning it to a Workflow variable

Workflow InlineScriptExample
{
    $WorkflowVariable = "value"
    $CapturedScriptOutput = InlineScript
    {
        $Using:WorkflowVariable
    }
}

 


Full Tutorial:

In order to understand variables as they apply to workflows in the context of PowerShell scripting, it is necessary to realize that workflows are not typical PowerShell scripts. Workflows (as opposed to functions) utilize the Windows Workflow Foundation engine for managing activities. You can express many Workflow activities using the same statements already used in PowerShell (Thanks Microsoft!). For example, the functionally equivalent PowerShell function:

function test {
    Get-WmiObject win32_Processor
}

would be expressed as the workflow:

Workflow test {
    Get-WmiObject win32_Processor
}

The syntax is the same and the output is the same, but the way we arrive at that output is different. This is important to understand because it will help us see how variables can be moved between workflows and typical PowerShell scripts so that we can take advantage of the functionality of both.

When do I need to utilize PowerShell capabilities in my Workflow?

Suppose I want to perform a ToUpper() operation on a string I am using in my workflow. ToUpper() is a .NET string method that does not translate into a Windows Workflow activity. If I try to perform the operation anyway, the PowerShell ISE will instruct me to include the statement in an InlineScript block.

Workflow test
{
    InlineScript
    {
        #Insert PowerShell only commands here
    }
}

When I use an InlineScript, I am literally launching a PowerShell instance to do the work that I can’t do within the workflow context. For this reason, all variables must be hand-delivered (figuratively speaking) to the InlineScript, and any data we want to save out must be returned to the Workflow.

Passing Workflow Variables to InlineScripts and Capturing Return Values

Now that we understand what an InlineScript block is really doing, working with variables within it becomes simple. If I have a variable that I need to use within the InlineScript, I use the “Using:” scope modifier, and the variable is passed in. If I then need to get a value out of the InlineScript, I return it and capture the result in a workflow variable. For example:

Workflow Get-AirForceValues
{
    $Values = "Integrity-Service-Excellence"

    #Starts InlineScript activity invoking a new process
    #Saves return data to $Values
    $Values = InlineScript
    {
        #Passes the $Values variable to the InlineScript
        $NewValues = $Using:Values

        $NewValues = $NewValues.ToUpper()
        $NewValues
    }
    $Values
}

Running this Workflow then returns the string “INTEGRITY-SERVICE-EXCELLENCE”. You can see that we were able to make a workflow variable visible to the InlineScript and then get back data when the InlineScript terminated.

Creating and Adding Your Own Modules to PowerShell

Quick Tutorial:

  1. Make sure your script is written as a function, named in the proper Verb-Noun format
  2. Save the file as <Verb>-<Noun>.psm1 in a folder named <Verb>-<Noun>
  3. Save the <Verb>-<Noun> folder in a custom modules folder
  4. If the PowerShell profile for your current session does not yet exist, create it, then edit it:
    New-Item -Path $Profile -Type File -Force
    Notepad $Profile
    
  5. Once the profile is open in Notepad, add the following line to it:
    $Env:PSModulePath = $Env:PSModulePath + ";C:\Path\to\Custom\Modules"
  6. Save the profile and restart your PowerShell session

 


Full Tutorial:

“How can I create my own modules that I can run from the PowerShell Console without using Import-Module every time?” is the question that prompted this. Here’s the answer (I’m using PowerShell 4.0):

Create and Store Your Modules

First, I recommend creating a file for each PowerShell function you write. Name the function with the conventional <Verb>-<Noun> format used in PowerShell (e.g. Remove-ArchEnemy).

Use approved verbs as described here:
Approved Verbs for Windows PowerShell Commands

Then, name your file the same as your function and use .psm1 as the file extension. This extension indicates that the file is a module. For example, if your function is called Remove-ArchEnemy, then your file would be

Remove-ArchEnemy.psm1

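For illustration, the contents of Remove-ArchEnemy.psm1 need be nothing more than the function itself (a minimal sketch; the parameter and the body are hypothetical):

function Remove-ArchEnemy
{
    Param(
        [Parameter(Mandatory=$true)]
        [string]$Name
    )

    #Hypothetical body: report which arch-enemy was dealt with
    Write-Output "Arch-enemy '$Name' has been removed."
}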

Give your module its own folder, also named after the function, then find a place to keep your modules permanently. A good example would be:

C:\Users\<username>\Documents\WindowsPowerShell\Modules

Using this, a complete path to our example would then be:

C:\Users\<username>\Documents\WindowsPowerShell\Modules\Remove-ArchEnemy\Remove-ArchEnemy.psm1


Setting Up PowerShell Profile and Adding Modules

PowerShell has a number of profiles. These correspond to different users and to the different hosts (the standard console versus the ISE). I won’t go into that here (for more on profiles, click here). For this exercise I will use the PowerShell ISE environment, which I use almost exclusively.

In the console type:

$Profile

My machine returns:

C:\Users\Jonathan\Documents\WindowsPowerShell\Microsoft.PowerShellISE_profile.ps1

Again, this will vary depending on the profile in use for your console. In reality, the profile is just a script that runs at the beginning of each PowerShell session.

You may notice that the profile path listed doesn’t even exist yet. If that is the case, we need to create the file and then edit it. This can be done with two commands:

New-Item -Path $Profile -Type File -Force
Notepad $Profile

With the profile now open for editing, we need to append our custom folder to the PSModulePath environment variable, which holds all of the paths PowerShell searches for modules. Add the following line to your profile:

$Env:PSModulePath = $Env:PSModulePath + ";C:\Users\Documents\WindowsPowerShell\Modules"

At the start of each PowerShell session the new path is now automatically added to the PSModulePath.

Save the file and restart PowerShell. Now, you should be able to call all of your custom functions from the PowerShell console. Also, this works just as well with Windows Workflow Foundation modules if you’re cool like that.
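To confirm everything is wired up, a quick check from a fresh console session (continuing the hypothetical Remove-ArchEnemy example):

Get-Module -ListAvailable Remove-ArchEnemy    #The module should now appear in the list
Remove-ArchEnemy -Name "Procrastination"      #And the function works without Import-Module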