Windows Update and WSUS error 80244019

Some colleagues had recently migrated from an old Windows Server 2003 WSUS server to a new Server 2012 R2 one. I had just upgraded a new tablet from Windows 8.1 Professional to Enterprise, and was trying to update it, as the upgrade process does not retain updates applied to the original OS. Group policy was telling the tablet to talk to my WSUS server (the one that had just been migrated). Windows Update on the tablet was finding updates, but getting error 80244019 when trying to download them. Checking directly against Windows Update on the internet both found the updates and was able to download them.

I had a look in the Windows Update log file (C:\Windows\WindowsUpdate.log) and found some of the following:

2014-10-28	13:13:36:140	 368	1558	DnldMgr	***********  DnldMgr: New download job [UpdateId = {7F8FDE4D-B17B-4093-915E-A807DA18A2DE}.204]  ***********
2014-10-28	13:13:36:140	 368	1558	DnldMgr	WARNING: CheckIfDirExists returned error 0x80070002.
2014-10-28	13:13:36:140	 368	1558	DnldMgr	  * Queueing update for download handler request generation.
2014-10-28	13:13:36:140	 368	1558	DnldMgr	Generating download request for update {7F8FDE4D-B17B-4093-915E-A807DA18A2DE}.204
2014-10-28	13:13:36:140	 368	1558	IdleTmr	WU operation (GenerateAllDownloadRequests) started; operation # 635; does not use network; is at background priority
2014-10-28	13:13:36:140	 368	18ec	IdleTmr	Decremented idle timer priority operation counter to 2
2014-10-28	13:13:36:156	 368	1558	Handler	Generating request for CBS update 7F8FDE4D-B17B-4093-915E-A807DA18A2DE in sandbox C:\WINDOWS\SoftwareDistribution\Download\5cbfe1eec732bb919f7239386a1b893b
2014-10-28	13:13:36:156	 368	1558	Handler	Selected payload type is ptExpress
2014-10-28	13:13:36:156	 368	1558	Handler	Detected download state is dsStart
2014-10-28	13:13:36:156	 368	1558	Handler	Adding (entire file) to request list.
2014-10-28	13:13:36:156	 368	1558	Handler	Request generation for CBS update complete with hr=0x0 and pfResetSandbox=0 
2014-10-28	13:13:36:156	 368	1558	IdleTmr	WU operation (GenerateAllDownloadRequests, operation # 635) stopped; does not use network; is at background priority
2014-10-28	13:13:36:156	 368	1558	DnldMgr	***********  DnldMgr: New download job [UpdateId = {7F8FDE4D-B17B-4093-915E-A807DA18A2DE}.204]  ***********
2014-10-28	13:13:36:156	 368	1558	DnldMgr	WARNING: CheckIfDirExists returned error 0x80070002.
2014-10-28	13:13:36:156	 368	1558	DnldMgr	  * BITS job initialized, JobId = {EC94DB72-43C2-4A3D-9972-62715BD76BBF}
2014-10-28	13:13:36:156	 368	1558	DnldMgr	  * Downloading from to C:\WINDOWS\SoftwareDistribution\Download\5cbfe1eec732bb919f7239386a1b893b\ (full file).
2014-10-28	13:13:36:171	 368	1558	IdleTmr	WU operation (DownloadManagerDownloadJob) started; operation # 637; does use network; is not at background priority; will NOT stop idle timer
2014-10-28	13:13:36:171	 368	1558	IdleTmr	Incremented idle timer priority operation counter to 3
2014-10-28	13:13:36:171	 368	1558	DnldMgr	*********
2014-10-28	13:13:36:171	 368	1558	DnldMgr	**  END  **  DnldMgr: Begin Downloading Updates [CallerId = AutomaticUpdatesWuApp]
2014-10-28	13:13:36:171	 368	1558	DnldMgr	*************
2014-10-28	13:13:36:171	 368	18ec	AU	AU checked download status and it changed: Downloading is paused
2014-10-28	13:13:36:171	 368	148c	DnldMgr	WARNING: BITS job {EC94DB72-43C2-4A3D-9972-62715BD76BBF} failed, updateId = {7F8FDE4D-B17B-4093-915E-A807DA18A2DE}.204, hr = 0x80190194, BG_ERROR_CONTEXT = 5
2014-10-28	13:13:36:171	 368	148c	DnldMgr	  Progress failure bytes total = 177805422, bytes transferred = 0
2014-10-28	13:13:36:171	 368	148c	DnldMgr	  Failed job file: URL =, local path = C:\WINDOWS\SoftwareDistribution\Download\5cbfe1eec732bb919f7239386a1b893b\
2014-10-28	13:13:36:171	 368	148c	DnldMgr	CUpdateDownloadJob::GetNetworkCostSwitch() Neither unrestricted or restricted network cost used, so using current cost
2014-10-28	13:13:36:187	 368	148c	IdleTmr	WU operation (DownloadManagerDownloadJob, operation # 637) stopped; does use network; is not at background priority; will NOT start idle timer (task did not previously stop it
2014-10-28	13:13:36:187	 368	148c	IdleTmr	Decremented idle timer priority operation counter to 2
2014-10-28	13:13:36:187	 368	148c	DnldMgr	Error 0x80244019 occurred while downloading update; notifying dependent calls.

and also several of these:

2014-10-28	13:12:10:040	 368	1294	Misc	WARNING: WinHttp: SendRequestToServerForFileInformation failed with 0x80190194
2014-10-28	13:12:10:040	 368	1294	Misc	WARNING: WinHttp: ShouldFileBeDownloaded failed with 0x80190194
2014-10-28	13:12:10:040	 368	1294	Agent	WARNING: Fail to download eula file with error 0x80244019
2014-10-28	13:12:10:040	 368	1294	Misc	WARNING: WinHttp: SendRequestToServerForFileInformation failed with 0x80190194
2014-10-28	13:12:10:040	 368	1294	Misc	WARNING: WinHttp: ShouldFileBeDownloaded failed with 0x80190194
2014-10-28	13:12:10:040	 368	1294	Agent	WARNING: Fail to download eula file with error 0x80244019

I tried to retrieve one of the referenced EULA text files via Internet Explorer and got a 404 page-not-found error.
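That 404 ties up with the 0x80190194 results in the log: WinHttp/BITS errors of the form 0x801901xx carry the HTTP status code in the low word, as a quick check shows:

```powershell
# 0x80190194 is BG_E_HTTP_ERROR_404: the low 16 bits are the HTTP status
$hr = 0x80190194
$HttpStatus = $hr -band 0xFFFF
Write-Host ("0x{0:X8} maps to HTTP status {1}" -f $hr, $HttpStatus)
```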

So then I did a bit of digging and came across a forum thread where the poster fixed the problem by moving the location of his (misconfigured) WsusContent folder.

I checked out my new WSUS server and sure enough found a WsusContent folder within a WsusContent folder on the drive that had been specified during the WSUS migration/configuration.

D:\WsusContent had about 240GB of data in it, whereas D:\WsusContent\WsusContent only had about 1.6GB. The nested folder did include the EULA text file that I’d seen referenced in the log, and had received a 404 for via IE.

I didn’t like the idea of copying 240GB of data into the nested WsusContent folder, so I thought I’d try fixing the problem by reconfiguring WSUS instead. It was actually fairly easy:

Change the following two registry values:

  1. HKLM\Software\Microsoft\Update Services\Server\Setup\ContentDir
    Change the REG_EXPAND_SZ to D:\ and restart the WsusService service. (Note that this step may not be necessary due to the naming of the registry key, i.e. “Setup”, but I’m doing it anyway to be tidy.)
  2. HKLM\System\CurrentControlSet\Services\LanmanServer\Shares\WsusContent
    Change the REG_MULTI_SZ from D:\wsuscontent\WsusContent to D:\WsusContent and restart the lanmanserver service.

Next, in IIS Manager, expand <Servername>, Sites, Default Web Site, right-click Content, Manage virtual directory, Advanced Settings and change Physical Path to D:\WsusContent\ – I did an iisreset after this, but that may not be necessary.

Finally, I moved the contents of the erroneous D:\WsusContent\WsusContent folder into D:\WsusContent.
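For reference, the steps above can be scripted roughly like this. This is a sketch assuming the same D:\ layout as mine; run it elevated, and eyeball the Shares multi-string value before writing it back:

```powershell
# 1. Point WSUS's ContentDir at the new location and restart the WSUS service
Set-ItemProperty -Path "HKLM:\Software\Microsoft\Update Services\Server\Setup" -Name ContentDir -Value "D:\"
Restart-Service -Name WsusService

# 2. Fix the path in the WsusContent share definition and restart the Server service
$SharesKey = "HKLM:\System\CurrentControlSet\Services\LanmanServer\Shares"
$Share = (Get-ItemProperty -Path $SharesKey).WsusContent
$Share = $Share -replace [regex]::Escape("D:\wsuscontent\WsusContent"), "D:\WsusContent"
Set-ItemProperty -Path $SharesKey -Name WsusContent -Value $Share
Restart-Service -Name LanmanServer -Force

# 3. Repoint the IIS Content virtual directory (equivalent of the IIS Manager steps)
Import-Module WebAdministration
Set-ItemProperty -Path "IIS:\Sites\Default Web Site\Content" -Name physicalPath -Value "D:\WsusContent"

# 4. Move the nested content into the correct folder
Move-Item -Path "D:\WsusContent\WsusContent\*" -Destination "D:\WsusContent"
```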

After those steps were completed, the text file was available via IE and my tablet is now happily updating itself from the WSUS server.

Incidentally, I like keeping the WsusContent folder on a separate drive as a) it can get quite big and I don’t want to risk my C: drive filling up, and b) it means I can use Windows Data Deduplication on it, where I’m currently getting about a 43% space reduction.




Free Active Directory and Exchange 2013 knowledge from Veeam

Just found the following free stuff from Veeam, both of which sound good:

  • Active Directory Expert Series [From Physical to Virtual]
    “Get the jumpstart on Active Directory: how to set it up, virtualize and ensure availability. Master backing up and restoring Active Directory with Veeam and the 5x MVP Sander Berkouwer, as well as how to mitigate risks in the larger part of your Domain Controllers’ lifecycles.”
  • Microsoft Exchange 2013
    “This course provides all the details you need for Exchange administration. You’ll learn about architecture and deployment,
    what’s new in version 2013, and security and disaster recovery.”

p.s. Thanks Veeam for the coffee machine that I won yesterday at IP Expo!


PowerShell LDAP response time monitor

I recently had some issues with a system not getting very good response times on LDAP queries sent to various Active Directory domain controllers.

These were resolved via a combination of Windows Server 2003 Server Performance Advisor and the built-in (and better) equivalent in 2008 R2 (found via Performance Monitor/Server Manager – Diagnostics – Performance – Data Collector Sets – System – Active Directory Diagnostics). It turned out that a new system was running an expensive full directory tree query against a non-indexed attribute, and this was using all the CPU. I have now had the attribute indexed, optimised the query to only look in certain OUs, and given the DC VM an extra CPU.
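As an illustration of the query scoping (the OU and attribute names here are made up), binding the searcher to a specific OU rather than the domain root keeps the query off the full tree:

```powershell
# hypothetical OU and attribute - bind the search root to the OU, not the domain root
$OU = [ADSI]"LDAP://OU=Staff,DC=example,DC=com"
$Searcher = New-Object System.DirectoryServices.DirectorySearcher $OU
$Searcher.Filter = "(employeeNumber=12345)"   # worth indexing this attribute in the schema if it's queried often
$Searcher.SearchScope = "Subtree"
# $Result = $Searcher.FindOne()   # this is the point at which the DC actually gets queried
```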

Anyway, in order to try and notice this kind of thing in the future, I have written a script that tests all my DCs by running a simple (if non-optimal) query every ten seconds, and logs the results to a CSV file for easy graphing/analysis in Excel. It also sends me an email if any of the DCs take longer than a specified number of milliseconds to return the results of the query. And it rolls the log file over when it reaches a specified size. The script can be run on multiple machines simultaneously so that you can test response times from different parts of your network and/or different PCs/servers. Create the log file folder before running the script for the first time.

Script is as follows:

# Specify DCs manually, or get all of them automatically
#$DCs = "dc03","dc04","dc05","dc06","dc07","dc08"
$DCs = (Get-ADDomainController -Filter *).Name
$LDAPUserToQuery = "tstd68"
$Logfile = "C:\Logs\LDAPTimes.csv"
$LogRollSize = 1024000 # size is in bytes
$SMTPServer = ""
$MailTo = ""

function Test-LDAPQuery($User,$DC){
    # assume that the DC is in the same domain as the user running the test
    $Root = [ADSI]("LDAP://"+$DC+"."+$env:USERDNSDOMAIN)
    $Searcher = New-Object System.DirectoryServices.DirectorySearcher $Root
    $Searcher.Filter = "(cn=$User)"
    # run the query and time how long it takes in milliseconds
    $Milliseconds = (Measure-Command {$Container = $Searcher.FindAll()}).TotalMilliseconds
    # this script doesn't do anything with the results of the query
    return $Milliseconds
}

while($true){
    $ThisResult = New-Object System.Object
    $ThisResult | Add-Member -Type NoteProperty -Name DateTime -Value (Get-Date -Format G)
    foreach($DC in $DCs){
        $Milliseconds = Test-LDAPQuery -User $LDAPUserToQuery -DC $DC
        # email an alert if the query took longer than 600ms
        if($Milliseconds -gt 600){
            Send-MailMessage -From ("LDAPPerfTest@"+$env:COMPUTERNAME+"."+$env:USERDNSDOMAIN) -Subject ($DC+" took "+$Milliseconds+"ms") -Body "The log file can be found at $Logfile on the machine that sent this email" -To $MailTo -SmtpServer $SMTPServer
        }
        $ThisResult | Add-Member -Type NoteProperty -Name $DC -Value $Milliseconds
    }
    $ThisResult | Export-Csv -Path $Logfile -Append -NoTypeInformation
    # roll the log file over once it reaches the specified size
    $Log = Get-ChildItem -Path $Logfile
    if($Log.Length -ge $LogRollSize){
        Rename-Item -Path $Logfile -NewName ($Logfile.Replace($Log.Extension,"")+"_"+(Get-Date -Format s).Replace(":","-")+$Log.Extension)
    }
    Write-Host "Waiting..."
    Start-Sleep -Seconds 10
}

Why I’m not deploying Windows desktops using Remote Desktop Services

…even though some of the improvements in the Remote Desktop technology are excellent.


I’ve been running hundreds of thin client Windows desktops from Citrix for well over ten years, starting with Metaframe 1.8 on Windows NT4 Terminal Server Edition right through to XenApp 6.x on Windows Server 2008 R2. I’ve been using cheap low power Windows CE based thin clients, originally Wyse 3200LE and later HP 5000 series.
Server 2008 R2 gave a pretty good Windows 7 style desktop experience, but the multimedia support was rubbish. HDX MediaStream for Flash was too cumbersome and had too many issues, and it didn’t work with CE clients anyway. I wanted to roll out Lync but it wasn’t supported.

Then Server 2012 came along (shortly followed by 2012 R2) and with it all the new enhancements in Remote Desktop Session Host and RemoteFX. So I started to evaluate it as a replacement. The RemoteFX adaptive encoding works really well; areas of the screen are identified and encoded in different ways depending on the type of content they’re deemed to hold. USB devices (can be made to) work just like they do on a PC.

Video is all transcoded on the fly into H.264 which means that the client only needs to support H.264. This gets around the problem of needing something like Citrix to have “special” support for each codec – they only ever really did Windows Media properly, Flash was too much of a bolt on, and forget things like Quicktime and RealMedia. The problem with the RDP approach is that doing on-the-fly video transcoding requires a LOT of CPU power. I had originally read that this could be offloaded to a graphics card, which made sense, and I even did some tests. But the results and information from Microsoft were not promising. I was going to need a whole load of high-end blade servers to provide all that CPU (I’m talking dual socket E5-2690v2 CPUs).

Reason number 1: Local desktops are cheaper

Plus you need to have a client device that has enough grunt to decode the H.264 video stream. I didn’t really want to go down the Windows Embedded route, as the management overhead with the Windows CE clients was so low, so I turned to the Wyse ThinOS range. I was told that Lync support was coming, and things were looking good. Until we tried to make them work properly, via the connection broker. Several months later we had that sorted. Video performance was OK but we had some audio lag issues, most noticeable when people were speaking in shot – the audio wasn’t synced to their lip movements. Not great for employee training videos.

There’s no way to limit the amount of CPU that’s used for the video transcoding either. If you have ten users on a server and nine of them decide to watch some YouTube at lunchtime, user number ten, who is trying to do some work, suddenly has no CPU capacity free to run their LOB apps; yet if nobody is running any video, you have CPUs sat at about 1% while people are just running Office apps. Then I found out that Lync support had been dropped.

Then I discovered the Intel NUC. The unit with the Core i3-4010U processor is more than powerful enough to do H.264 video, even full screen on a 24″ monitor, and with a 64GB mSATA SSD and 4GB RAM it makes an excellent fast-booting client. Plus I get the Windows 8.1 client licence for no extra charge due to my licensing agreement. It works out cheaper than a ThinOS client too, yet has far more CPU grunt, better connectivity and much more flexibility. So then I started wondering if I needed all that high-end Remote Desktop Services server hardware as well… why not just run the desktop locally on the client…

Reason Number 2: RemoteApp works on local desktops

I also use my Citrix XenApp farm to publish corporate applications to desktops. These run seamlessly, such that they look like they’re running on the end user’s desktop. This works pretty well, and is pretty much the only way I could support some of the more “unique” apps. I deliver them to the client PCs via the PN Agent/Online Plugin/Citrix Receiver (take your pick of the names), so application availability is handled by making users members of Active Directory security groups; the PNAgent puts shortcuts onto the Start Menu, and passthrough authentication means the apps “just start” when people click the shortcut.

Microsoft introduced a feature called RemoteApp which does a similar thing, and Windows 7 and higher, plus Server 2012 and higher, have a RemoteApp connection manager built in which performs a similar function to PNAgent. On Windows 8/Server 2012 you can even configure it via Group Policy. Oh, unless your desktop is itself being delivered from a Remote Desktop Session Collection. Where they deliberately broke it. You can add in the URL manually, but that’s not great if you have a mix of Remote Desktop and local desktop clients – I want consistency. Oh, and if you’re using User Profile Disks, another great feature of Server 2012, that workaround fails. Yes, you can use the RemoteApp web site to access your apps, but you can’t make it automatically accept your user credentials from the desktop session, and you don’t get any filetype associations.

Reason Number 3: No Lync support

I’d found out that Wyse had dropped Lync support from their ThinOS range, which was a shame, but was still considering building my own “thin client” running a very locked down Windows 8 that effectively launched a full screen Remote Desktop as soon as anyone logged on. I’d done this before with Windows NT 4 and it worked pretty well. Then I found out that Microsoft had decided not to support Lync at all in Session Host desktops – the only “remote desktop” support for Lync is via a hypervisor-based VDI solution, which provides far lower user densities than a Session Host (and thus higher costs). Lync is what we’re moving to for all our telephony, so Lync support for all desktops is mandatory.

Reason Number 4: Poor management and helpdesk support

My helpdesk staff can currently see a list of user sessions (be they logged on or disconnected) via the Citrix <insert current product name here> management console. This allows me to assign permissions to the helpdesk staff such that they can perform certain operations and not others, the most useful of which is shadowing (i.e. viewing a user’s desktop to ease troubleshooting and/or provide better application assistance). It also enables them to see which XenApp server the user is logged into, which helps the end users if a server develops a software fault – several users running from the same server calling with problems enables the helpdesk to disable logons to that server and escalate to second level support.

Remote Desktop Services doesn’t have this. Microsoft dropped the old tsadmin.exe utility and replaced it with the so-called Remote Desktop Management Server. This is a total farce, as it’s just a “feature” of the Server 2012 Server Management GUI. You have to add ALL the Remote Desktop Services servers to the managed server pool, and if the back end teams add any extra RDSH servers (or remove any) the console breaks and must be reconfigured by hand, via the clunky GUI, by every member of your helpdesk staff. It is utter rubbish. I wrote my own basic shadowing console using PowerShell, but now won’t need it – if you have a local desktop you just need the computer name and you can shadow via Remote Assistance.


I am not going to be running full desktops via Remote Desktop Session Host. There are too many weirdly broken and oddly unsupported bits of critical functionality, plus the server hardware requirements are just too high and too uncontrollable.

RemoteFX is brilliant tech and User Profile Disks are a good innovation, so I am disappointed with the lack of a coherent desktop strategy from Microsoft. But ultimately their part-broken tech has saved me a load of money. I’ll still deliver applications from Server 2012 R2 via RemoteApp, but as there’ll be no video requirement I can carry on using the same X5560-based servers that I’ve had for the past five years or so – these have plenty of power for the application load. I might even ditch some of them, as with no desktops at all I can run fewer RDSH servers – my application-only servers have far lower resource requirements than ones running full desktops.


VMM: The version of virtualization software or the VMM agent is unsupported

I have been adding a Dell VRTX as a Hyper-V 2012 R2 cluster to my System Center Virtual Machine Manager 2012 R2 system.
I have two M620 nodes and some shared storage in the VRTX chassis. I created the cluster, added the storage as a Cluster Shared Volume, then added the Hyper-V role to each of the nodes manually.
Next I went to VMM and chose the option Add Hyper-V Hosts and Clusters. I gave it the cluster management name, and from this it identified the two nodes. The Discover clusters and their nodes job completed successfully, as did the Create new host cluster job. After that good start I then had two failures, one for each node:

Error (10400)
Before Virtual Machine Manager can perform the current operation, the virtualization server must be restarted.
Recommended Action
Restart and then try the current operation again.

These were followed by a “Completed w/ Info”, that actually contained two errors:

Error (441)
Agent communication is blocked. On, the version of virtualization software or the VMM agent is unsupported.
Recommended Action
Update the agent and virtualization software, and then try the operation again.
Error (441)
Agent communication is blocked. On, the version of virtualization software or the VMM agent is unsupported.
Recommended Action
Update the agent and virtualization software, and then try the operation again.
Warning (13926)
Host cluster was not fully refreshed because not all of the nodes could be contacted. Highly available storage and virtual switch information reported for this cluster might be inaccurate.
Recommended Action
Ensure that all the nodes are online and do not have Not Responding status in Virtual Machine Manager. Then refresh the host cluster again.

I checked that the firewall was correctly configured, and that WinRM was working correctly. This is easily done by running:

winrm quickconfig

and also by doing a PowerShell Invoke-Command to verify remote communication, e.g. I used:

Invoke-Command -Computername vrtxnode1 -ScriptBlock {Get-ChildItem -Path C:\}

Everything was fine, so I rebooted the nodes, per the errors above. This was a bit strange as the nodes had rebooted twice as part of the process of adding the Hyper-V role. Once they were back up, from the VMs and Services view in VMM, I right-clicked the first node and chose the Add host to cluster option. This completed “w/ info”, with an error from the second node again saying that agent communication was blocked. But the first node had added successfully. I then repeated the Add host to cluster operation on the second node, which also completed “w/ info” but only contained a warning about multipath I/O not being enabled. This is to be expected as the dual PERC8 option isn’t quite available yet, so I only have a single controller in my VRTX chassis.
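For what it’s worth, the agent and virtualization software versions that error 441 complains about can also be inspected from the VMM side. This is a sketch using the VMM 2012 R2 PowerShell module (run on the VMM server or a machine with the console installed; check the property names against Get-Help in your version):

```powershell
# requires the virtualmachinemanager module (installed with the VMM console)
Import-Module virtualmachinemanager
# list the computers VMM manages, with their state and agent version as VMM sees them
Get-SCVMMManagedComputer | Select-Object Name, State, AgentVersion
```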


How to shrink a Windows harddisk and migrate to SSD

I have been switching a lot of people from HDDs (hard disk drives: spinning magnetic disks) to SSDs (solid state drives: flash-based) recently. It can breathe new life into an old PC or laptop, as the HDD is almost invariably the slowest part of the system. SSD prices have come down a lot, but they’re still quite a bit more expensive than HDDs.

Luckily, most people don’t have that much on their HDDs, and there is quite a lot of free space. For example, a business PC I upgraded recently had a 500GB HDD, and I was able to replace this with a 128GB SSD costing about £60, vs about £200-300 for a ~500GB SSD. There was only about 60GB of data on the disk.

From Windows Vista onwards, there is a “Shrink volume” option within Disk Management (diskmgmt.msc), which will reduce the size of a partition. If you can reduce the size of all the partitions down to less than the size of a smaller SSD, and still have a reasonable amount of free space, then you can save some money when you upgrade.
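On Windows 8/Server 2012 onwards the same shrink can also be scripted with the Storage module. A sketch, where the 100GB target is just an example:

```powershell
# see how far the partition can currently shrink, then resize if the target is feasible
$Supported = Get-PartitionSupportedSize -DriveLetter C
if ($Supported.SizeMin -le 100GB) {
    Resize-Partition -DriveLetter C -Size 100GB
} else {
    Write-Host ("C: will currently only shrink to {0:N1}GB" -f ($Supported.SizeMin / 1GB))
}
```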

The trick is squashing the data up to the front of the disk. Windows tends to spread the data across the disk for various reasons, and the partition will only shrink down to where the last block of data is. Consider the following disk map:


Notice the annoying little block of data right at the end of the disk? That is stopping me making the partition any smaller. Luckily, Windows 7 onwards logs which file is preventing the partition from shrinking any further. You can then find out which process has the file open (use the file handle search in Process Explorer – Ctrl-F or click the binoculars on the toolbar), and close it. If the process belongs to a service then it’s a good idea to stop the service via Service Management (services.msc). Then it’s just a case of going through the process again and again until the partition shrinks sufficiently.

Alternatively, download the evaluation of PerfectDisk (or just buy it!) – it has a handy “Prep for shrink” option, which will try and squash the used blocks up towards the front of the disk. Plus, if any blocks are marked as “excluded” you can double-click on them and see what files they contain. Then you can use Process Explorer to identify the processes, and close them, as above.

PerfectDisk can also move the MFT, which is sometimes at the end of the disk – use the boot time defrag option. Aside from the MFT, you might want to temporarily turn off the page file (and reboot). Another thing which seems to keep files locked open is the Windows Search service. On one machine I went through all the running services, stopping anything that wasn’t absolutely necessary. I can’t give a definitive list as this will vary somewhat from machine to machine.

If the HDD is smaller than a sensible SSD, say 80GB HDD going to a 128GB SSD, then you don’t need to worry about any of that. Just clone the HDD over to the SSD – I’ve had success with CloneZilla running from a USB stick. You’ll obviously need a machine with two disk interfaces, which might mean you also need a desktop PC if you’re upgrading a laptop.

If the HDD is bigger, CloneZilla will not like you very much, even if the partition sizes are smaller than the SSD. You can still use it, but you’ll need to do a startup repair on the Windows installation on the SSD after the clone, using the Windows installation media, or the machine won’t boot from the SSD. Alternatively, use Symantec Ghost to do the clone.

You’ll probably want to expand the partition to fill the available space on the SSD, and you might want to switch the disk controller to AHCI mode in the BIOS too, but make sure you enable AHCI in Windows first or it’ll blue screen when it next boots.
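The “enable AHCI in Windows first” step boils down to letting the Microsoft AHCI driver start at boot before you flip the BIOS setting. On Windows 7 that is the msahci service (Windows 8/8.1 use storahci instead, which also has a StartOverride subkey to zero – check your OS before relying on this):

```powershell
# Windows 7: set the Microsoft AHCI driver to boot start (0 = boot)
Set-ItemProperty -Path "HKLM:\System\CurrentControlSet\Services\msahci" -Name Start -Value 0
# then reboot, switch the controller to AHCI in the BIOS, and boot again
```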


PowerShell script to get temperature from NetBotz

Wrote this to extract the temperature reading from a few NetBotz devices that I have in my datacentres.

Note that the second time I call the function, it is getting the temperature from a second (non-docked) sensor pod attached to the first NetBotz.

function Get-NetBotzTemp($Name,$URL,$User,$Pass){
    # set up web client and authentication
    $WebClient = New-Object System.Net.WebClient
    $CredCache = New-Object System.Net.CredentialCache
    $Credential = New-Object System.Net.NetworkCredential($User,$Pass)
    $CredCache.Add($URL,"Basic",$Credential)
    $WebClient.Credentials = $CredCache
    # get text contents of web page
    $WebPage = $WebClient.DownloadString($URL)
    # ditch the first part of the page, up until the numeric temperature value
    $LastPart = ($WebPage -split ',*_TEMP.+pic">')[1]
    # ditch the last part of the page, after the numeric temperature value
    $Fahrenheit = ($LastPart -split ' °F')[0]
    # NetBotz defaults to Fahrenheit based on what it thinks the browser is, so convert to Celsius
    $Celsius = "{0:N1}" -f (($Fahrenheit - 32) / 1.8)
    # display output
    Write-Host ("Temperature in "+$Name+" is "+$Celsius)
}

Get-NetBotzTemp -Name "DC1" -URL "" -User "netbotz" -Pass "netbotz"
Get-NetBotzTemp -Name "DC2" -URL "" -User "netbotz" -Pass "netbotz"
Get-NetBotzTemp -Name "DC3" -URL "" -User "netbotz" -Pass "netbotz"
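To see what the splits in the function above are doing, here they are run against a made-up fragment standing in for the NetBotz page HTML (the real markup will differ, but the logic is the same):

```powershell
# made-up stand-in for the portion of the NetBotz page around the reading
$WebPage = 'header stuff NB_TEMP<img src="temp pic">77.0 °F footer stuff'
# everything after the sensor image markup...
$LastPart = ($WebPage -split ',*_TEMP.+pic">')[1]
# ...then everything before the degrees-Fahrenheit suffix
$Fahrenheit = ($LastPart -split ' °F')[0]
# convert to Celsius and format to one decimal place
$Celsius = "{0:N1}" -f (($Fahrenheit - 32) / 1.8)
Write-Host ($Fahrenheit + "F is " + $Celsius + "C")
```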
