PowerShell: Get SCCM Malware Detections to CSV file

Pulling detection data from dedicated Malware systems such as McAfee ePO is quite straightforward, simple, and fairly powerful.

The SCCM 2012 reports are arguably powerful, but whereas ePO has a fairly straightforward report creator built in, with SCCM you need to know how to use the (not very user-friendly for non-specialists) Microsoft Report Viewer and/or SQL Server Reporting Services to get exactly the data you want. I do not currently fall into the category of people who can write their own SSRS reports, but I can knock up an SQL query, so I decided to just pull the data direct from the SCCM SQL database.

The built-in Infected Computers report does show you which machines have had malware detected, but it groups them by computer, so you can’t easily get an overview of malware detections by date – if a machine has detected malware multiple times you have to drill into that machine to see all its detection dates. So whilst you can save the report output as CSV or Excel, it’s no good to me as I can’t get all the data in one place.

I like to be able to see detections over time, and store and analyse the results. For this I need raw data, so I wrote the following SQL query and PowerShell script that will pull the appropriate data out of SCCM and save it to a CSV file. When writing the SQL query, this SQL script was handy to find the tables containing the column references.

$SCCMSQL = "SCCMSQLServer.rcmtech.co.uk"
$SCCMDB = "CM_RCM"
$OutFile = "D:\SCCM Malware Detections.csv"

# Open connection to SCCM DB
$SCCMDBConnection = New-Object -TypeName System.Data.SqlClient.SqlConnection -ArgumentList "Server=$SCCMSQL;Database=$SCCMDB;Integrated Security=SSPI"
$SCCMDBConnection.Open()
# Get data
$SQLCommand = $SCCMDBConnection.CreateCommand()
$SQLSelect = "SELECT
    Computer_System_DATA.Name00 AS ComputerName,
    DetectionTime,
    Users.UserName,
    Process,
    ThreatName,
    Path,
    EP_ThreatSeverities.Severity,
    EP_ThreatCategories.Category,
    CleaningAction,
    ExecutionStatus,
    ActionSuccess,
    PendingActions,
    ErrorCode,
    RemainingActions,
    LastRemainingActionsCleanTime
    FROM $SCCMDB.dbo.EP_Malware
    INNER JOIN $SCCMDB.dbo.Computer_System_DATA ON EP_Malware.MachineID = Computer_System_DATA.MachineID
    INNER JOIN $SCCMDB.dbo.EP_ThreatCategories ON EP_Malware.CategoryID = EP_ThreatCategories.CategoryID
    INNER JOIN $SCCMDB.dbo.EP_ThreatSeverities ON EP_Malware.SeverityID = EP_ThreatSeverities.SeverityID
    INNER JOIN $SCCMDB.dbo.Users ON EP_Malware.UserID = Users.UserID
    ORDER BY DetectionTime ASC"
$SQLCommand.CommandText = $SQLSelect
$SQLReader = $SQLCommand.ExecuteReader()
$SQLResultsTable = New-Object System.Data.DataTable
$SQLResultsTable.Load($SQLReader)
$SQLReader.Close()
$SCCMDBConnection.Close()
# Rows.Count gives the number of result rows
Write-Host ("Found "+$SQLResultsTable.Rows.Count+" results")
$SQLResultsTable | Export-Csv -Path $OutFile -NoTypeInformation -Force
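
With the raw data exported, the by-date overview that the built-in report can’t give you is only a couple of lines away. This is a sketch, assuming the CSV produced by the script above (the exact DetectionTime string format may vary with your SQL Server’s regional settings):

```powershell
# Count malware detections per day from the exported CSV
Import-Csv -Path "D:\SCCM Malware Detections.csv" |
    Group-Object -Property { ([datetime]$_.DetectionTime).Date } |
    Sort-Object -Property Name |
    Select-Object -Property Name, Count
```

Swap the grouping expression for ThreatName or ComputerName to slice the same data a different way.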

PowerShell: Get AD attributes

A while back I posted a script to monitor Active Directory LDAP response times. As part of this I had a chunk of code that not only did an LDAP lookup, but also pulled all the AD attributes into a PowerShell object. Here’s the code:

$User = $env:USERNAME
# assume that the DC is in the same domain as the user running the test
$DC = $env:LOGONSERVER.Replace("\\","")
$Root = [ADSI] ("LDAP://"+$DC+"."+$env:USERDNSDOMAIN)

$Searcher = New-Object System.DirectoryServices.DirectorySearcher $Root
# assumes the account's cn matches its username; (sAMAccountName=$User) may be more reliable
$Searcher.Filter = "(cn=$User)"
# run the query
$Container = $Searcher.FindAll()

[System.Collections.ArrayList]$Names = $Container.Properties.PropertyNames
[System.Collections.ArrayList]$Properties = $Container.Properties.Values
$Obj = New-Object System.Object
for ($i = 0; $i -lt $Names.Count; $i++)
	{
		$Obj | Add-Member -type NoteProperty -Name $($Names[$i]) -Value $($Properties[$i])
	}
$Obj.pwdlastset = [System.DateTime]::FromFileTime($Obj.pwdlastset)
$Obj.lastlogontimestamp = [System.DateTime]::FromFileTime($Obj.lastlogontimestamp)

$Obj
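
If you have the RSAT ActiveDirectory module available, the same attributes can be had with rather less plumbing; a sketch (the module being installed is an assumption, and note it already does the FromFileTime-style conversions for you):

```powershell
# Requires the ActiveDirectory module (part of RSAT)
Import-Module ActiveDirectory
# -Properties * pulls every attribute, not just the default set
$ADUser = Get-ADUser -Identity $env:USERNAME -Properties *
# PasswordLastSet and LastLogonDate arrive as ready-converted DateTime values
$ADUser | Select-Object -Property Name, PasswordLastSet, LastLogonDate
```

The DirectorySearcher approach above still has its place, though – it works from any domain-joined machine with no extra modules installed.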

Proper VMware Storage Quality of Service from Tintri

One complaint that I’ve had about mixed storage technology (i.e. SSD and HDD in the same box/same LUN/volume/store) is that an application’s performance can’t be guaranteed to be consistent.

For example, if a database has been happily running from SSD with nice low latency and high IOPS, but then gets shoved down onto HDD because you clone a load of new VMs, provision another system, or run some other storage-intensive operation, your database performance might well suffer quite considerably. This is sometimes referred to as the noisy neighbour scenario.

Conversely, if the database has been running consistently on HDD and then gets promoted to SSD, you might see not only a significant performance boost to the database, but also a significant CPU increase on the host, which might have a knock-on effect on other VMs.

Note that most of the time you won’t see this type of effect; the tiering process is done very intelligently and especially in the case of Tintri, probably about 95%-100% of your IO will be coming from SSD permanently – and with Tintri, 100% of write IO always goes to SSD.

But knowing that still doesn’t stop people worrying about fluctuating performance.

I was talking to Tintri about this at IP Expo last October, and whilst they had a feature that allowed you to guarantee a minimum level of IOPS for a VM (note: “VM” not LUN or datastore as with legacy SAN) they couldn’t limit the maximum IOPS. This has now changed and they provide full QoS (Quality of Service) to allow both maximum and minimum IOPS to be set on a per VM basis. There’s a datasheet that provides full details on how it works.

This ties in with the Tintri per-VM IO queue, whereas most other (particularly LUN-based) storage systems queue IOs per LUN or per datastore.

There’s a good video showing how it works here: https://youtu.be/BC2OvYWeknI



ClamAV False Positive OutlkLR.cab

Today my QNAP NAS daily ClamAV scan job reported that it had found a virus in a file called OutlkLR.cab, which is part of the Office 2010 install DVD ISO (which I’ve extracted on my NAS). The full path to the file is as follows:

SW_DVD5_Office_Professional_Plus_2010w_SP1_64Bit_English_CORE_MLF_X17-76756/Outlook.en-us/OutlkLR.cab

ClamAV thinks that the file is infected with BC.Exploit.CVE_2013_3940 (a Windows GDI remote code execution vulnerability).

I have reported the false positive to ClamAV.

This was with virus definitions dated 2015/04/08 18:35.

UPDATE 2015-04-10: This is resolved as of virus definitions dated 2015/04/09 18:51


Defrag your VMs to improve storage performance

Even if you have some kind of clever VM storage device such as a Tintri, ideally you should still defrag your VMs. If you’re still running VMs on a legacy SAN you should definitely keep them defragged.

I’m not going to go into all the details here, because there’s a great article that explains it all over at Virtual Strategy. In short though, defragged files and consolidated free space require far fewer IOs and thus allow the storage subsystem to operate far more efficiently.

Personally I have a site licence for Raxco PerfectDisk, and it does a great job.


Move Maverick folder to external SD card on Android

Maverick is a great mapping tool. It’s very handy because it downloads map tiles to allow maps to be used offline, including Ordnance Survey (for those of us in the UK). However, the map tiles are images and you can end up with thousands and thousands of them – I have over 32,000 files taking up about 1.5GB. This is a lot of space to use on the internal storage of my HTC One M8, especially when I have a 16GB microSD card available.

The HTC One M8 is a bit strange in that the internal storage is presented as /sdcard and the actual SD card mounts as /sdcard2 and /storage/ext_sd. As far as I can tell this is a legacy thing/workaround from older versions of Android, where the “internal” storage was tiny (about 200MB on the HTC Desire, I seem to remember) and the SD card could only be used for certain apps and files. That latter limitation got lifted such that the “built in” flash storage is now mounted as /sdcard, and thus the actual microSD card has to be called something else. To me it seems like a mess, but there you go – Android is very young in terms of computer history.

The method for moving the Maverick folder that worked for me on Android 4.4.2 is a bit fiddly, and probably requires root access (I’m not sure, and I’m not un-rooting my phone to test it!). The fiddle lies in the SD card “security” that Google introduced with this version of Android, which stops apps accessing folders they don’t “own” on the external SD card. This is my trial and error version of the (not very good/broken) instructions from the Maverick support site.

  1. Install ES File Explorer
  2. Open ES File Explorer and browse to /sdcard
  3. You should see your current maverick folder. Long-press it so that it gets a tick in the box that appears to the right of it on the screen (the box only appears once you’ve long-pressed).
  4. Touch the three dots “More” button and then touch “Move to”.
  5. Press the back arrow until you see “/” in the list, then touch this. Then touch “storage”, followed by “ext_sd” and then “OK”.
  6. The folder will be moved – this may take some time. It’ll depend on how many tiles you have and how fast your microSD card is. Mine took about 15 minutes. I’m doing it this way because when I tried to copy the folder with the phone plugged into my laptop, the file copy kept just stopping randomly and wasn’t reliable. ES worked first time. I think this bit will require root due to the SD card security.
  7. Once the folder has been moved, still in ES File Explorer, browse to /storage/ext_sd and you should now see your moved maverick folder.
  8. Long-press the maverick folder and touch the “More” button again. This time choose “Associate app”. Wait for the list to populate then find and select Maverick and touch “OK”. This step allows Maverick to access the /storage/ext_sd/maverick folder, otherwise it’d be blocked by Android security. I think this bit may also require root access, again due to the SD card security in Android 4.4.
  9. The maverick folder should now have the Maverick compass icon superimposed onto it.
  10. Next you need to browse back to your original /sdcard and create a folder there called maverick. Inside this, create another folder called redirect and browse into that.
  11. You should now be viewing the (empty) folder /sdcard/maverick with ES File Explorer. Press the + “New” button and choose “File”. Enter the file name as to.storage.ext_sd and press “OK” to create an empty file. Note that the file name mirrors the folder location in step 5, but using a dot instead of a forward slash, and prefixed with “to.” – you could move your maverick folder anywhere you like.
  12. That’s it! Open Maverick and it should pick up the moved folder and still have all your tracks, tiles, etc.
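
If you have adb set up on a PC, the folder and redirect file from steps 10 and 11 can also be created from a shell. This is a rough sketch only – adb being available, root not being needed for these particular writes, and the exact redirect path are all assumptions based on my reading of the steps above:

```powershell
# Create the redirect marker from a PC via adb, mirroring steps 10-11
adb shell mkdir -p /sdcard/maverick/redirect
adb shell touch /sdcard/maverick/redirect/to.storage.ext_sd
```

You’d still need to move the maverick folder itself (step 6), and ES File Explorer remains the reliable way to do the “Associate app” step.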

Update: This does not work now that I’ve upgraded to Android 5.0. Maverick can read the tiles from the microSD card, but can’t write new ones. It seems to keep trying to download the tiles over and over as the network throughput jumps up when you scroll to an area of the map where you have no tiles. So I’ve now had to move the maverick folder back to the internal storage again!


Replace SSL Certificate in Dell OpenManage Server Assistant 7.3

Dell OpenManage Server Assistant is hardware health monitoring and configuration software that can be installed onto PowerEdge servers. It is very useful as it lets you see details of the hardware, along with any faults, plus it can be used to configure various aspects of the hardware, e.g. the RAID controller. It also has an agent that can be queried by Dell OpenManage Essentials to provide centralised hardware alerts.

It runs via an HTTPS web server on port 1311, but by default (as with most things like this) uses a self-signed certificate. This leads to annoying certificate errors being generated by web browsers when you visit the site, e.g. Internet Explorer’s “There is a problem with this website’s security certificate.” page, where you have to click the “Continue to this website (not recommended).” link.

It’s quite easy to replace the certificate with one from your own in-house Certificate Authority. I’m using Active Directory Certificate Services. I have a root CA and an intermediate CA. The thing that caught me out was the way OMSA refers to the server certificate as the “root” certificate…

Procedure is as follows:

  1. Sign in to the OMSA site running on your server. Click Preferences, General Settings, X.509 Certificate.
  2. Pick Certificate Maintenance and click Next, then change the Select appropriate action drop down to Certificate Signing Request (CSR) and click Next.
  3. Copy all the text in the box to the clipboard. (Text starts with —–BEGIN NEW CERTIFICATE REQUEST—–)
  4. Go to your corporate CA, probably something like https://ca.rcmtech.co.uk/certsrv/ and click Request a certificate, then click advanced certificate request.
  5. Click the link Submit a certificate request by using a base-64-encoded CMC or PKCS #10 file, or submit a renewal request by using a base-64-encoded PKCS #7 file.
  6. Paste the CSR text you put on the clipboard in step 3 into the Saved Request textbox.
  7. Pick an appropriate certificate template – the Enhanced Key Usage should include Server Authentication (1.3.6.1.5.5.7.3.1).
  8. I like to add a Subject Alternative Name entry in the Additional Attributes box; this allows the certificate to be valid on the plain server name, the fully qualified server name and the server IP address. The format is as follows:
    san:dns=myserver&dns=myserver.rcmtech.co.uk&dns=192.168.1.123
  9. Click Next. You’ll get a pop up asking if you want the site to perform a certificate operation, click Yes.
  10. Now you’ll be on the Certificate Issued page. OMSA needs the certificates to be in Base 64 encoded format, so click that radio button. You also need both the certificate for the server itself plus the chain of certificates including your CA root and intermediate CA.
  11. Click the Download certificate link and save the .cer file.
  12. Now also click the Download certificate chain link, and save the .p7b file.
  13. Go back to OMSA, you might need to sign in again as the default timeout is quite short. Click the X.509 Certificate heading under Web Server to return to the X.509 Certificate Management page.
  14. Now click the Import a root certificate radio button and click Next. Browse to the .cer file (I know this does not contain your CA root certificate… do it anyway!). Click Update and Proceed.
  15. Now you’re presented with another Browse button, this time pick the .p7b file, click Import.
  16. You should be told Successfully imported. <certfile>.p7b. Click the Activate the new certificate. button.
  17. Then you’ll be told Click the restart button to activate the new certificate. If the new certificate is not active after restart, click the help button for steps to restore the previous certificate. So click the Restart to Activate New Certificate button. The OMSA web server will restart. Click OK to the pop-up, then close the browser tab or click the Quit browser button.
  18. Give it a few seconds then re-visit the OMSA site, and you should find there are now no certificate errors present.
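
As a final check, you can pull the certificate that OMSA is now presenting without leaving PowerShell. A sketch, where the server name is a placeholder for your own, and the always-true validation callback is there purely so the script can read the certificate whether or not the chain is trusted yet:

```powershell
# Fetch and display the certificate served on OMSA's port 1311
$TcpClient = New-Object System.Net.Sockets.TcpClient("myserver.rcmtech.co.uk", 1311)
$SslStream = New-Object System.Net.Security.SslStream($TcpClient.GetStream(), $false, {$true})
$SslStream.AuthenticateAsClient("myserver.rcmtech.co.uk")
# Re-wrap as X509Certificate2 to get at NotAfter etc.
$Cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2($SslStream.RemoteCertificate)
$Cert | Format-List -Property Subject, Issuer, NotAfter
$TcpClient.Close()
```

If the Issuer shows your intermediate CA rather than the OMSA self-signed name, the import worked.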