Search through Pwned Passwords with PowerShell

Troy Hunt recently released over 300 million SHA1 hashes of passwords that his Have I Been Pwned website has been collecting. The site allows you to search the database to see if your passwords are included in those from many data dumps and breaches. However, putting a valid password into a third-party website, even one that claims to be doing good things (and I’m sure it is), is a bad idea. The roughly 6GB of downloads allow you to search the cache of passwords yourself, on your own machine, which is much safer.

Loading these files into an editor to use the search function is not going to be easy though, so I wrote a script to search the file piece by piece.

At the moment there are three files, and I concatenated them using the Windows copy command and the /b switch:

copy file1.txt+file2.txt+file3.txt output.txt
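If you’d rather stay in PowerShell, a stream copy achieves the same thing without loading the files into memory. This is a sketch: the filenames match the example above and will differ from the actual download names.

```powershell
# Concatenate the downloads into one file using .NET streams
# (filenames are placeholders - use the real downloaded names)
$Parts = "file1.txt","file2.txt","file3.txt"
$OutStream = [System.IO.File]::Create("output.txt")
foreach($Part in $Parts){
    $InStream = [System.IO.File]::OpenRead($Part)
    $InStream.CopyTo($OutStream)  # binary copy, equivalent to copy /b
    $InStream.Close()
}
$OutStream.Close()
```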

How it works

This PowerShell script takes two parameters: the path to the password file, and the password to search for. It converts the password into a SHA1 hash, then searches the file for that hash. It’s not fast, but it does give you a very rough progress bar. Use an SSD, a fast processor (with turbo capability) and, if you’re going to do multiple searches, more RAM than the size of the hashes text file plus plenty of room for your OS (Windows will cache the entire file in RAM if it can). The script reports whether or not it found the hash of your password – you can test it with a password like qwerty or 123456, as these are both in there.


Save the file, and specify the parameters on the command line:
.\SearchPwned.ps1 -PassFile C:\users\me\documents\pwned-passwords.txt -Password "MySecretPassword"



param(
    [Parameter(Mandatory=$true)][string]$PassFile,
    [Parameter(Mandatory=$true)][string]$Password
)

# Hash the password with SHA1 and convert it to an upper-case hex string (the format used in the files)
$StringBuilder = New-Object System.Text.StringBuilder
[System.Security.Cryptography.HashAlgorithm]::Create("SHA1").ComputeHash([System.Text.Encoding]::UTF8.GetBytes($Password)) | ForEach-Object{
    [void]$StringBuilder.Append($_.ToString("X2"))
}
$Hash = $StringBuilder.ToString()
Write-Host "Searching for $Hash" -ForegroundColor Gray

# Do some rough maths to give an idea of progress
$FileSize = (Get-ChildItem -Path $PassFile).Length
$HashSize = $Hash.Length
$ChunkSize = 2000
$ChunkLength = $HashSize * $ChunkSize
$ChunkLengthRead = 0

$Found = $false
Get-Content -Path $PassFile -ReadCount $ChunkSize | ForEach-Object{
    $ChunkLengthRead = $ChunkLengthRead + $ChunkLength
    Write-Progress -Activity "Searching" -PercentComplete ([math]::Min($ChunkLengthRead/$FileSize*100,100))
    if($_ -match $Hash){
        $Found = $true
    }
}
Write-Progress -Activity "Searching" -Completed
if($Found){
    Write-Host "Found" -ForegroundColor Red
}else{
    Write-Host "Not Found" -ForegroundColor Green
}
Posted in PowerShell, Security

GUI to log off Remote Desktop users by non-admins

I’ve blogged before about the issues with the well-intentioned but ill-thought-out Remote Desktop Management Server concept in Windows Server 2012 (inc. R2), trying to come up with workarounds for all the things you used to be able to do easily with tsadmin in previous versions that you now just cannot do.

Like delegating to non-admin users (e.g. helpdesk, expert users) the ability to log off other users.

So here’s a PowerShell script that falls back on the (very) old but thankfully still perfectly functional quser and logoff commands. My suggestion is to create a group, put the helpdesk users who need this functionality into the group, then grant the group permission via the following command:

wmic /namespace:\\root\CIMV2\TerminalServices PATH Win32_TSPermissionsSetting WHERE (TerminalName ="RDP-Tcp") CALL AddAccount "domain\group",2

You need to run that on all your RDS servers.

Once the helpdesk staff are in the group they’ll need to log off the RDS server and back on again. Now you can give them the script to run.
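If you’d rather not shell out to wmic, the same permission change can be made from PowerShell via the same WMI class – a sketch, run elevated on each RDS server:

```powershell
# PowerShell equivalent of the wmic AddAccount call
$Perm = Get-WmiObject -Namespace "root\CIMV2\TerminalServices" -Class Win32_TSPermissionsSetting -Filter "TerminalName='RDP-Tcp'"
$Perm.AddAccount("domain\group",2)  # 2 = the same permission preset as in the wmic command
```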

The script uses quser to get the current user sessions on the server where it’s being run, parses the output and displays it in a GridView (with multi-select). Selected users are then logged off via the (also old) logoff command.
Get your helpdesk user to right-click the script, select “Run with PowerShell”, then just select one or more users to log off and click “OK”.

# Log off RDS user sessions
# For regular users to be able to do this you need to grant them permission:
# wmic /namespace:\\root\CIMV2\TerminalServices PATH Win32_TSPermissionsSetting WHERE (TerminalName ="RDP-Tcp") CALL AddAccount "domain\group",2

# Get the list of users using good old quser (query user)
$QUser = &quser.exe
# Get rid of the Sessionname column because it is inconsistent (contains no data for disconnected sessions)
$QUser = $QUser -replace("rdp-tcp#\d+","")
# Remove header row
$QUser = $QUser -replace("username.+time","")
# Tidy up the spaces to leave one space separator only
$QUser = $QUser -replace("\s+"," ")
# Remove the current user line prefix
$QUser = $QUser -replace(">"," ")
# Split into an array, data starts at position 3, 7 items per line
$QUserArray = $Quser -split " "
# Make an array of objects
$CurrentUsers = New-Object System.Collections.ArrayList
for ($i = 3; $i -lt $QUserArray.Count; $i+=7){
    $ThisUser = New-Object -TypeName System.Object
    Add-Member -InputObject $ThisUser -MemberType NoteProperty -Name "UserName" -Value $QUserArray[$i]
    Add-Member -InputObject $ThisUser -MemberType NoteProperty -Name "ID" -Value $QUserArray[$i+1]
    Add-Member -InputObject $ThisUser -MemberType NoteProperty -Name "State" -Value $QUserArray[$i+2]
    Add-Member -InputObject $ThisUser -MemberType NoteProperty -Name "IdleTime" -Value $QUserArray[$i+3]
    Add-Member -InputObject $ThisUser -MemberType NoteProperty -Name "LogonTime" -Value ($QUserArray[$i+4]+" "+$QUserArray[$i+5])
    $CurrentUsers.Add($ThisUser) | Out-Null
}
# Display the array in a gridview
$SelectedUsers = $CurrentUsers | Out-GridView -Title "Select users(s) to log off" -OutputMode Multiple
# Log off selected sessions
foreach($User in $SelectedUsers){
    Write-Host ("Logging off "+$User.UserName+" (session ID "+$User.ID+")... ") -NoNewline
    $x = &logoff.exe $User.ID
    Write-Host "Done"
}
Start-Sleep -Seconds 1
Posted in PowerShell, Remote Desktop, Windows

No more excuses, sort out your IT management basics!

I am fed up. This is a bit of a rant, but with good reason: companies and services that I and all of us pay good money for are not being managed properly. I say: Enough, no more excuses.

The ongoing reports in the media about assorted cyber attacks tend to all have just a few things in common:

  • Outdated and unsupported software
  • Supported software that has not been kept up to date
  • Administrator-level privileges routinely being used

Now this really is basic stuff.

Let’s ignore the detail of the attacks, almost none are using zero-days. That means no excuses. It just indicates lax systems management. Irresponsible, lazy, incompetent, poor resource allocation. IT departments – I blame you. Though it’s usually the managers: they hold the responsibility, and thus it is only fair to hold them accountable. And that is extremely worrying considering the human, financial and technical resources they already have at their disposal.

Let’s look at those three common issues in more detail. My solutions to these issues will be addressed in future posts.

Outdated and unsupported software

By this I am including the software that runs inside your hardware. Everything connected to your network runs software. Firewalls, printers, switches, display screens, HVAC, WiFi access points, all that BYOD stuff you got pressured into allowing because “everyone’s doing it”, storage systems, remote access controllers, your new fancy and eye-wateringly expensive security monitoring system. So none of this “oh but it’s an appliance so I don’t need to update it”. Oh yes you do: it’s probably running some version of Linux with a web interface that’s now full of holes. “But we bought it before anyone knew about this cyber stuff”. Right, so it’s 20 years old then? Didn’t think so. In any case, you know about it now, so update it, replace it, or get it off the network.

Add this to your purchasing criteria, otherwise your next “appliance” might become a dangerous attack vector just a few weeks (if you’re unlucky) or months (lucky) after you buy it, and the only safe thing you can do is disconnect it, which tends to make these things pretty useless. And it’ll make you look stupid.

“But we didn’t have time to move off Windows XP before Microsoft cut support for it”. Rubbish. You just lack basic planning skills. Microsoft, as with many other software companies, provide support and (importantly) updates for their products based on product lifecycles. These state quite clearly when the different levels of support will end. These dates are announced when a product is first released, and historically were updated as Service Packs came out. Which means that you had many years of notice of the support end date. Plenty for pretty much any budget or size of rollout. Note that if you’re only just now moving from XP to Windows 7 SP1 – fail – you “only” have until Jan 14th 2020, yes two and a half years, to get off it onto something newer (it was released mid-2009 – it’s currently eight years old, and that is a long time in OS years!).

Oh, you could pay huge amounts of money for updates beyond the extended support date, but that’s rather adding financial injury to the insult already bestowed on your organisation by their IT management. And it doesn’t magically upgrade you to Windows “latest” – you still have to do that work yourself.

Outdated applications also make upgrading to newer OSs difficult or impossible, though there are almost invariably workarounds that range from OK to pretty nasty.

A final point to make about using outdated software is that frequently the mechanisms it uses are also outdated. Think old and compromised security protocols, requirements to access sensitive parts of the operating system, reliance on old compromised plugins and libraries, non-existence of modern security features, incompatibility with modern security features. You might have to turn off some of the security in your newer systems because otherwise all your old stuff won’t be able to talk to them! I bet you never get around to turning that back on, either.

Supported software that has not been kept up to date

“If we update it, it might break”. How about this: You don’t update it and some hacker/malware will break it for you. And break a whole load of other stuff too, and you won’t have a clue what’s been done to what, where your data’s gone/been sold, when it’ll resurface, or if you’ve even recovered properly. Assuming you do recover at all.

You do not want to be doing some crazy cocktail of updates a couple of times a year because that’s asking for trouble too. If your business units can’t tolerate a few minutes of downtime per PC and server once a month either a) they’re lying, or b) you bought the wrong system or designed it wrong.

It’s much better to just keep things up to date every month. Which, in Windows, is the default behaviour (i.e. it has to be deliberately disabled).

You’ll mostly be fine. In fact you’ll almost certainly be fine. I know, I’ve done this to thousands of PCs, hundreds of servers and applications with tens of thousands of users for many, many years. I am not just “lucky”.

And if you’re not fine, well that’s where good update management, recovery procedures, and SLAs with your suppliers come in. And at least you know what you did to break it. But 999 times out of a thousand nothing will break, and you’ll feel smug that you’re not getting hacked via months- or years-old vulnerabilities – because that really does make you look stupid.

Administrator-level privileges routinely being used

Nasty, messy, dangerous, expensive. Do not give end users administrator privileges. No excuses. If they need them to run some horrible piece of software – time to update it to a slightly less horrible version (see above) or ditch it for a supplier that can actually write modern code (as opposed to something they’ve been banging away at in Visual Basic (not .NET)). If they want to run iTunes (insert any other non-work-related application here) on their work PC, either get your boss to agree with their boss that the costs involved to package, deploy and update this regularly are worth it (hint: they won’t be), or tell them “no” upfront. Simple. Running as administrator allows almost all security restrictions to be bypassed – even if your users don’t try it you can bet that bit of malware they just clicked on will.

Also, do not routinely (ideally ever) use domain administrator accounts on end-user devices. If something has managed to escalate its privileges locally, it is extremely easy to steal or impersonate other live credentials – and if those happen to belong to a domain administrator then it’s game over. Don’t be lazy, admins.

And while we’re on the subject, don’t use the same local administrator password on everything: each device’s should be different and should be changed regularly.

Rant over

This really is basic stuff, and yet we continue to see the reports in the news headlines. The people who should be taking care of this need to just crack on and do it. GDPR may help, but it’s really just a (very) big stick – if your personal data has been compromised, or a service you rely on can no longer function, the fact that a bunch of execs might go to prison still doesn’t get your data back or your service up and running again. GDPR won’t help until it has scared the people responsible into actually doing their jobs properly, which sadly will take years.

If you work in IT and your department isn’t taking care of the above points, ask your boss why not. Get them to ask their boss why not. If you don’t like the answers (or don’t get any at all), leave before somebody tries to pin the impending disaster on you!

I’ll be posting simple methods of not getting caught out by the above points over the next few days so watch/follow/etc. so you don’t miss out.

Posted in Business, Security

Function to invoke PowerShell ISE from shell

Another little function to add to your PowerShell profile.

If you’re in the PowerShell Integrated Scripting Environment you can use the command

psedit <filename>

to open the file in the ISE editor – and it doesn’t just have to be a .ps1 file, it’ll open anything.

But if you have a regular PowerShell prompt open, psedit doesn’t work. Until now!

function psedit ($Filename){
    & "$env:SystemRoot\System32\WindowsPowerShell\v1.0\powershell_ise.exe" $Filename
}

It’ll also work if you just type psedit with no filename, and will open a new tab in the editor.
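As a quick usage example, once the function is in your profile you can even use it to edit the profile itself:

```powershell
# Open the current user's profile script in the ISE editor
psedit $PROFILE
```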

Posted in PowerShell

Get Uptime with Get-Uptime

I’ve been using this for ages, and have now finally got around to a) blogging it, and b) updating it to use Get-CimInstance. The latter makes it particularly easy to code and thus makes the function very compact due to the way it “nicely” handles the date.

So here it is in all its glory (!):

function Get-Uptime ([string]$ComputerName = $env:COMPUTERNAME){
    (Get-Date) - (Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName $ComputerName).LastBootUpTime
}

It’ll default to the local computer unless you specify -ComputerName when calling it.

Personally, I have this in my PowerShell profile, so that it’s always available for use.

As you can see, it compares the LastBootUpTime property from the Win32_OperatingSystem CIM object with the current date & time and returns the difference (as an object – so you can do stuff with it like pipe it to Select-Object etc.).

The output looks something like:

Days              : 7
Hours             : 4
Minutes           : 15
Seconds           : 54
Milliseconds      : 139
Ticks             : 6201541391309
TotalDays         : 7.17770994364468
TotalHours        : 172.265038647472
TotalMinutes      : 10335.9023188483
TotalSeconds      : 620154.1391309
TotalMilliseconds : 620154139.1309
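Because the return value is a TimeSpan object, you can pick out properties or compare it numerically – for example (assuming the function above is loaded):

```powershell
# Show just the most useful fields
Get-Uptime | Select-Object -Property Days,Hours,Minutes
# Use the TimeSpan in logic, e.g. flag machines up for more than 30 days
if((Get-Uptime).TotalDays -gt 30){
    Write-Host "Not rebooted for over a month - patched recently?"
}
```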

I’ll leave it as an exercise for the reader to make it work across time zones (it really won’t take you much effort!).

Posted in PowerShell

Remote PowerShell to an IP Address

PowerShell remoting works over the WS-Man protocol, which in Windows is implemented via WinRM. By default this uses Kerberos authentication, and in most domain environments everything “just works”.

If the machine isn’t correctly registered in DNS you’ll get an error:

Enter-PSSession : Connecting to remote server server001 failed with the following error message : WinRM cannot process
the request. The following error with errorcode 0x80090322 occurred while using Kerberos authentication: An unknown
security error occurred.
 Possible causes are:
  -The user name or password specified are invalid.
  -Kerberos is used when no authentication method and no user name are specified.
  -Kerberos accepts domain user names, but not local user names.
  -The Service Principal Name (SPN) for the remote computer name and port does not exist.
  -The client and remote computers are in different domains and there is no trust between the two domains.
 After checking for the above issues, try the following:
  -Check the Event Viewer for events related to authentication.
  -Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or
use HTTPS transport.
 Note that computers in the TrustedHosts list might not be authenticated.
   -For more information about WinRM configuration, run the following command: winrm help config. For more
information, see the about_Remote_Troubleshooting Help topic.
At line:1 char:1
+ Enter-PSSession -ComputerName server001
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (server001:String) [Enter-PSSession], PSRemotingTransportException
    + FullyQualifiedErrorId : CreateRemoteRunspaceFailed

As a result, or in another scenario, you may need to connect via a machine’s IP address instead of its name. This, by default, won’t work either, and you’ll get a different error:

Enter-PSSession : Connecting to remote server failed with the following error message : The WinRM client
cannot process the request. Default authentication may be used with an IP address under the following conditions: the
transport is HTTPS or the destination is in the TrustedHosts list, and explicit credentials are provided. Use
winrm.cmd to configure TrustedHosts. Note that computers in the TrustedHosts list might not be authenticated. For more
information on how to set TrustedHosts run the following command: winrm help config. For more information, see the
about_Remote_Troubleshooting Help topic.
At line:1 char:1
+ Enter-PSSession -ComputerName
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: ( [Enter-PSSession], PSRemotingTransportException
    + FullyQualifiedErrorId : CreateRemoteRunspaceFailed

The fix for this is, as the error text suggests, to add the remote machine’s IP address to the TrustedHosts list, as follows:

Open an elevated PowerShell prompt and type:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "<IP address>"

Then from a regular PS session:

Enter-PSSession -ComputerName <IP address> -Credential (Get-Credential -UserName rcmtech\rcm-admin -Message "gimme the password")

Note that you have to specify credentials, per the example above.
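One caution: Set-Item replaces the whole TrustedHosts value by default, so if you already have entries in there, use the WSMan provider’s -Concatenate parameter to append instead (the IP address below is a placeholder):

```powershell
# Append to TrustedHosts rather than overwriting existing entries
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "192.168.1.50" -Concatenate -Force
# Verify the resulting list
Get-Item WSMan:\localhost\Client\TrustedHosts
```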

Posted in PowerShell

Find out what’s really happening in your network with LogRhythm NetMon Freemium

I recently discovered that LogRhythm, as well as offering a very full-featured Security Intelligence Platform and SIEM, also provide a “freemium” Network Monitor. The “freemium” refers to the fact that the software is 100% free, full-featured and not time-limited, but is capacity-limited. Those limits are 1Gbps of network bandwidth and three days of history.

Those limits are fine if you don’t already have anything like this running in your network; depending on what kind of stuff you pump over it, for the average small office they’ll be fine and give you some great insights into what’s going on.


The hardware requirements state that the system should have a minimum of 8GB RAM (12GB recommended), 4 CPU cores (minimum of 2), and unless you just want to monitor traffic from the NetMon box itself you’ll need two NICs.

NetMon freemium is available to download either as a VirtualBox VM, or as an ISO image. I chose the ISO option as I had an old Dell PowerEdge 2950 lying around so decided to run the software on that. I tried using Rufus but could not get it to boot properly, so I gave up and burnt the ISO onto a DVD.

I’d recommend plugging one of your two NICs into a network with DHCP enabled, and leave the other one disconnected initially. NetMon is based upon CentOS 7, and installation is really straightforward.

Once installed, you’ll be given a logon prompt. Log in as logrhythm with a password of changeme. Use the command:
ip address
to see what IP address has been obtained from DHCP – look for the line beginning inet in a section with eth near the beginning, e.g. 2: eth0. If DHCP didn’t work, reboot to get it to try again. Then from a web browser, open https://<ip address> and you’ll get a LogRhythm sign in page. (If you get a pop-up authentication box, just click Cancel on it, I’m not sure why this sometimes appears). The web-based credentials default to a username of admin and a password of changeme. Change the password.

Now connect the second NIC to your switch. From the NetMon GUI, go to Configuration – Engine, and set the Input Interface. This should be set to one of the options starting netmap, I only had one: netmap:eno2 so I picked this. Click Apply Changes then go to Diagnostics – Interface and start to watch the Packet Rate graph, this updates every few seconds so you can see the data start to arrive once you’ve done the network configuration on your switch.

I’ve configured the switch port to be in Switched Port Analyzer (SPAN) mode, which is a way of sending all the traffic from one or more ports or VLANs to another port. My office PCs are all on a particular VLAN so I’ve chosen to send all of that VLAN’s traffic to my SPAN port. On a Cisco switch you do this by creating a monitor session. You can have multiple of these, and you may already have some set up, so first check what you have:

sh run | inc monitor

and you may be shown some lines such as:

monitor session 1 source interface Gi1/0/2 , Gi1/0/4
monitor session 1 destination interface Gi1/0/46

This means that in order to create a new monitor session, I have to use session 2, as session 1 is already in use. NIC2 of my server is plugged into port Gi1/0/47, and aside from a description, there is no other configuration on this switch port. To send all VLAN10 traffic to this port I used the commands:

conf t
monitor session 2 source vlan 10 both
monitor session 2 destination interface g1/0/47

At this point I saw the Received line jump up from zero in the Packet Rate graph, so I knew the command had worked and NetMon was receiving data.

Analyse your data

So now you’re getting data, and by clicking on the LogRhythm logo you’ll be taken to the dashboard, by default showing you:

  • Top Applications by Bandwidth (histogram)
  • Top Applications by Bandwidth (pie)
  • Top Applications by Packet Count (pie)
  • Analyze table

This gives a good overview of what’s happening right now on your network. You can click into the pie chart sections to filter data immediately, or click onto a section of the histogram to build a filter that sets the protocol and time range to give you more detail. For example, what’s all that smb traffic at 11:43? Click the pale blue section of the bar, click Apply Now, and the timeline is broken down into five-second slots; the Analyze Table lets you go through the sessions and see what was talking to what. You can see more information by clicking the “up” arrow just underneath the histogram; this gives a table showing the amount of data sent by the selected protocol per second:

Click on the date on one of these rows, e.g. the one where 72.326MB was transferred, and it’ll filter again to that time but at 100ms granularity. I now have only one entry in the Analyze Table at the bottom, and can see the source and destination IP addresses (sadly it doesn’t resolve these into DNS names). By expanding the line in the Analyze Table, I can see lots of info, most interestingly (because this is the SMB protocol) a list of all the filenames accessed. In my case, I can now see that this traffic was caused by a machine doing a full Active Directory Group Policy refresh, which is perfectly normal.

Over to you

I’ve found this product to be pretty good, it’s giving me insight into something that I had no visibility of until now, and I suspect most people would find the same. There are a lot more features that I’ve not looked at yet, such as alarms, customising dashboards, charts and tables.

There are some videos showing how to use NetMon on LogRhythm’s YouTube channel, and there’s also a community site you can sign up to for free for more resources and to ask for help. All in all, NetMon Freemium is going to be a nice additional tool for keeping an eye on my systems and data.

Posted in Networking, Security

PowerShell Transcription to a file share breaks everything, and how to fix it

There’s been a bit of noise about PowerShell-based malware recently, and given the “assume breach” security mindset, I thought it was about time I enabled some of the PowerShell logging features in Windows. The basis behind “assume breach” is that you assume that your network security has already been breached and there are unauthorised things going on in your environment. The trick is to put in place suitable logging and monitoring to be able to detect and trace that activity.

The definitive source for PowerShell security config seems to still be the PowerShell (heart) the Blue Team blog post from mid-2015. This gives a lot of good info about what security features are available in PowerShell 5.0 and how to enable them in a sensible way. Most of them can be configured by Group Policy, e.g. transcription is enabled and configured via “Turn on PowerShell Transcription” in Windows Components – Administrative Templates – Windows PowerShell.

Based on that, and other things I’d read, PowerShell transcription to a network share seemed like exactly what I should have turned on. It gives you a detailed text log of everything that occurs in a PowerShell session, logged to an off-the-box location. Very handy.

So I turned it on.

And lots of things broke.

Enter-PSSession : Processing data from remote server ServerA failed with the following error message:
Could not find a part of the path '\\<server>\PowerShellTranscript$'. For more information, see the
about_Remote_Troubleshooting Help topic.
At line:1 char:1
+ enter-pssession -ComputerName ServerA
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : InvalidArgument: (Server01:String) [Enter-PSSession], PSRemotingTransportException
+ FullyQualifiedErrorId : CreateRemoteRunspaceFailed

The problem seems to be with how I admin my servers (remotely, not on the console) and how the transcription feature works.

The root of my problem seems to be Kerberos double-hop authentication.

Consider how you might have done similar things in the past:

  • From your PC, remote desktop onto ServerA, access a share on ServerB. No problems.
  • From your PC, run PSExec to get a remote command prompt on ServerA, access a share on ServerB. No problems.
  • Set the logging folder of a service running as NT Authority\System on ServerA to write to a UNC path on ServerB, having granted the account ServerA$ permission to the share and granted NTFS permissions to the folder. No problems.
  • Forwarded Windows Event Logs from ServerA to ServerB, and noticed how details of people interacting with ServerA appeared in Forwarded Events on ServerB. No problems.

So why is PowerShell transcription different? The transcription is done by delegation of the credentials from my PC by ServerA and using those to try to access the share on ServerB. This won’t work: assume I’m running PowerShell as an admin on my PC (OK, not best practice, but fixing that is going to take a long time) and that I then run the following command:

Invoke-Command -ComputerName ServerA -ScriptBlock {Get-ChildItem -Path \\ServerB\c$}

This will fail with an Access is Denied (PermissionDenied) error.

There are various ways to make the double-hop work. As far as I can tell, the best balance of security and ease of use is to use Resource-Based Kerberos Constrained Delegation.

This works by configuring the Active Directory computer object of ServerB to allow it to accept delegated credentials via ServerA:

Set-ADComputer -Identity ServerB -PrincipalsAllowedToDelegateToAccount (Get-ADComputer -Identity ServerA)

Possibly followed by running the following on ServerA:

klist purge -li 0x3e7

but only if you’d tried and failed to use delegated credentials from ServerA in the past 15 minutes. If you hadn’t, the klist command is unnecessary.

Also, annoyingly, this method will not allow you to enter a remote session from your PC to ServerA and then create a remote session from ServerA to ServerB. Resource-based Kerberos constrained delegation does not support WinRM, you’ll get a 0x8009030e error (a specified logon session does not exist).

So if you want all your servers to send their PowerShell transcripts to ServerB, you need to add all your servers to the list of PrincipalsAllowedToDelegateToAccount for ServerB, and keep adding new servers to that list as they are created. Pain in the neck.
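If you do go down that route, note that Set-ADComputer overwrites the delegation list, so each new server has to be appended to what’s already there. A sketch, where ServerC is a placeholder for the newly built server:

```powershell
# Append ServerC to ServerB's existing delegation list instead of replacing it
$Current = (Get-ADComputer -Identity ServerB -Properties PrincipalsAllowedToDelegateToAccount).PrincipalsAllowedToDelegateToAccount
$NewList = @($Current) + (Get-ADComputer -Identity ServerC).DistinguishedName
Set-ADComputer -Identity ServerB -PrincipalsAllowedToDelegateToAccount $NewList
```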

How I’d like transcription to work is that the transcripts are written by the computer account where the PowerShell is being executed. I would have thought that this was possible, because the computer account is the NT Authority\System account, and that is as powerful and privileged an account as you can get in Windows. That way I would just configure the transcripts file share to have write access for the Domain Computers group, and everything would be well with the world.

Posted in PowerShell, Security

Collect user and group SIDs and names from Active Directory

Ever found yourself looking through the Access Control List of a file/folder/share and mixed in along with the group names (hopefully not user names!) you see some SIDs? These look something like S-1-5-21-0123456789-0123456789-0123456789-0123.
These are the Security IDs of deleted groups and users. Wouldn’t it be handy to have a list of these so you could work out what used to have permission but has since been deleted? Yes it would, so I wrote a basic PowerShell script to collect all the SIDs and names and store them in an XML file. It also has a field for whether the item is a user or a group, plus the date and time the item was added.
You can run the script as a regular user, but you will need the AD PowerShell cmdlets installed (possibly via RSAT if on a client OS).

$XMLFile = "C:\Users\Public\Documents\UsersAndGroups.xml"
# Get users and groups from AD
$ADUsers = Get-ADUser -Filter * | Select-Object -Property Name,SID
$ADGroups = Get-ADGroup -Filter * | Select-Object -Property Name,SID
# Create an array to store AD users and groups
$UsersAndGroups = New-Object -TypeName System.Collections.ArrayList
# Add users to array
foreach($User in $ADUsers){
    $ThisUser = New-Object -TypeName System.Object
    Add-Member -InputObject $ThisUser -MemberType NoteProperty -Name SID -Value $User.SID.Value
    Add-Member -InputObject $ThisUser -MemberType NoteProperty -Name Name -Value $User.Name
    Add-Member -InputObject $ThisUser -MemberType NoteProperty -Name Type -Value "User"
    Add-Member -InputObject $ThisUser -MemberType NoteProperty -Name DateAdded -Value (Get-Date -Format s)
    $UsersAndGroups.Add($ThisUser) | Out-Null
}
# Add groups to array
foreach($Group in $ADGroups){
    $ThisGroup = New-Object -TypeName System.Object
    Add-Member -InputObject $ThisGroup -MemberType NoteProperty -Name SID -Value $Group.SID.Value
    Add-Member -InputObject $ThisGroup -MemberType NoteProperty -Name Name -Value $Group.Name
    Add-Member -InputObject $ThisGroup -MemberType NoteProperty -Name Type -Value "Group"
    Add-Member -InputObject $ThisGroup -MemberType NoteProperty -Name DateAdded -Value (Get-Date -Format s)
    $UsersAndGroups.Add($ThisGroup) | Out-Null
}
# Get existing data if it already exists
if(Test-Path -Path $XMLFile){
    $XMLData = Import-Clixml -Path $XMLFile
    $XMLDataArrayList = New-Object -TypeName System.Collections.ArrayList
    # Keep all the existing data
    foreach($Item in $XMLData){
        $XMLDataArrayList.Add($Item) | Out-Null
    }
    # Update existing data with new SIDs
    foreach($Item in $UsersAndGroups){
        if($XMLData.SID -contains $Item.SID){
            #Write-Host $Item.SID -ForegroundColor Green
        }else{
            Write-Host $Item.SID -ForegroundColor Red
            $XMLDataArrayList.Add($Item) | Out-Null
        }
    }
    # Write updated data back to XML file
    $XMLDataArrayList | Export-Clixml -Path $XMLFile
}else{
    # Write first time data to XML file
    $UsersAndGroups | Export-Clixml -Path $XMLFile
}

To view the file, and be able to easily search it, just use:

Import-Clixml -Path "C:\Users\Public\Documents\UsersAndGroups.xml" | Out-GridView
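Or, to look up a specific orphaned SID straight from an ACL (the SID shown is the placeholder from earlier):

```powershell
# Find the name that an orphaned SID used to belong to
Import-Clixml -Path "C:\Users\Public\Documents\UsersAndGroups.xml" |
    Where-Object {$_.SID -eq "S-1-5-21-0123456789-0123456789-0123456789-0123"}
```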
Posted in PowerShell, Security, Windows