Move Maverick folder to external SD card on Android

Maverick is a great mapping tool. It’s very handy because it downloads map tiles to allow maps to be used offline, including Ordnance Survey (for those of us in the UK). However, the map tiles are images and you can end up with thousands and thousands of them – I have over 32,000 files taking up about 1.5GB. That’s a lot of space to use on the internal storage of my HTC One M8, especially when I have a 16GB microSD card available.

The HTC One M8 is a bit strange in that the internal storage is presented as /sdcard and the actual SD card mounts as /sdcard2 and /storage/ext_sd. As far as I can tell this is a legacy workaround from older versions of Android, where the “internal” storage was tiny (about 200MB on the HTC Desire, I seem to remember) and the SD card could only be used for certain apps and files. That latter limitation got lifted such that the “built in” flash storage is now mounted as /sdcard, and thus the actual microSD card has to be called something else. To me it seems like a mess, but there you go: Android is very young in terms of computer history.

The method for moving the Maverick folder that worked for me on Android 4.4.2 is a bit fiddly, and probably requires root access (I’m not sure, and I’m not un-rooting my phone to test it!). The fiddle lies in the SD card “security” that Google introduced with this version of Android. This stops apps accessing folders they don’t “own” on the external SD card.

This is my trial and error version of the (not very good/broken) instructions from the Maverick support site.

  1. Install ES File Explorer
  2. Open ES File Explorer and browse to /sdcard
  3. You should see your current maverick folder. Long-press it so that it gets a tick in the box that appears to the right of it on the screen (the box only appears once you’ve long-pressed).
  4. Touch the three dots “More” button and then touch “Move to”.
  5. Press the back arrow until you see “/” in the list, then touch this. Then touch “storage”, followed by “ext_sd” and then “OK”.
  6. The folder will be moved – this may take some time. It’ll depend on how many tiles you have and how fast your microSD card is. Mine took about 15 minutes. I’m doing it this way because when I tried to copy the folder with the phone plugged into my laptop, the file copy kept just stopping randomly and wasn’t reliable. ES worked first time. I think this bit will require root due to the SD card security.
  7. Once the folder has been moved, still in ES File Explorer, browse to /storage/ext_sd and you should now see your moved maverick folder.
  8. Long-press the maverick folder and touch the “More” button again. This time choose “Associate app”. Wait for the list to populate then find and select Maverick and touch “OK”. This step allows Maverick to access the /storage/ext_sd/maverick folder, otherwise it’d be blocked by Android security. I think this bit may also require root access, again due to the SD card security in Android 4.4.
  9. The maverick folder should now have the Maverick compass icon superimposed onto it.
  10. Next you need to browse back to your original /sdcard and create a folder there called maverick. Inside this, create another folder called redirect and browse into that.
  11. You should now be viewing the (empty) folder /sdcard/maverick/redirect in ES File Explorer. Press the + “New” button and choose “File”. Enter the file name as to.storage.ext_sd and press “OK” to create an empty file. Note that the file name mirrors the folder location in step 5, but using a dot instead of a forward slash and prefixed with “to.” – you could move your maverick folder anywhere you like.
  12. That’s it! Open Maverick and it should pick up the moved folder and still have all your tracks, tiles, etc.
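As an aside, the naming rule from step 11 can be expressed programmatically. This little sketch (Python, purely illustrative – Maverick itself doesn’t run Python) derives the redirect file name from a target folder path:

```python
def redirect_filename(target_path):
    # Per step 11: drop the leading slash, swap "/" for "." and prefix with "to."
    return "to." + target_path.strip("/").replace("/", ".")

print(redirect_filename("/storage/ext_sd"))  # to.storage.ext_sd
```

So if you moved the folder somewhere else, e.g. a hypothetical /storage/sdcard1, the redirect file would be named to.storage.sdcard1.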
Posted in Android

Replace SSL Certificate in Dell OpenManage Server Administrator 7.3

Dell OpenManage Server Administrator (OMSA) is hardware health monitoring and configuration software that can be installed onto PowerEdge servers. It is very useful as it lets you see details of the hardware, along with any faults, plus it can be used to configure various aspects of the hardware, e.g. the RAID controller. It also has an agent that can be queried by Dell OpenManage Essentials to provide centralised hardware alerts.

It runs via an HTTPS web server on port 1311, but by default (as with most things like this) uses a self-signed certificate. This leads to annoying certificate errors being generated by web browsers when you visit the site, e.g. Internet Explorer’s “There is a problem with this website’s security certificate.”, where you have to click the “Continue to this website (not recommended).” link.

It’s quite easy to replace the certificate with one from your own in-house Certificate Authority. I’m using Active Directory Certificate Services. I have a root CA and an intermediate CA. The thing that caught me out was the way OMSA refers to the server certificate as the “root” certificate…

Procedure is as follows:

  1. Sign in to the OMSA site running on your server. Click Preferences, General Settings, X.509 Certificate.
  2. Pick Certificate Maintenance and click Next, then change the Select appropriate action drop down to Certificate Signing Request (CSR) and click Next.
  3. Copy all the text in the box to the clipboard. (Text starts with -----BEGIN NEW CERTIFICATE REQUEST-----)
  4. Go to your corporate CA, probably something like https://ca.rcmtech.co.uk/certsrv/ and click Request a certificate, then click advanced certificate request.
  5. Click the link Submit a certificate request by using a base-64-encoded CMC or PKCS #10 file, or submit a renewal request by using a base-64-encoded PKCS #7 file.
  6. Paste the CSR text you put on the clipboard in step 3 into the Saved Request textbox.
  7. Pick an appropriate certificate template – the Enhanced Key Usage should include Server Authentication (1.3.6.1.5.5.7.3.1).
  8. I like to add a Subject Alternative Name entry in the Additional Attributes box; this allows the certificate to be valid for the short server name, the fully qualified server name and the server IP address. The format is as follows:
    san:dns=myserver&dns=myserver.rcmtech.co.uk&dns=192.168.1.123
  9. Click Next. You’ll get a pop up asking if you want the site to perform a certificate operation, click Yes.
  10. Now you’ll be on the Certificate Issued page. OMSA needs the certificates to be in Base 64 encoded format, so click that radio button. You also need both the certificate for the server itself plus the chain of certificates including your CA root and intermediate CA.
  11. Click the Download certificate link and save the .cer file.
  12. Now also click the Download certificate chain link, and save the .p7b file.
  13. Go back to OMSA, you might need to sign in again as the default timeout is quite short. Click the X.509 Certificate heading under Web Server to return to the X.509 Certificate Management page.
  14. Now click the Import a root certificate radio button and click Next. Browse to the .cer file (I know this does not contain your CA root certificate… do it anyway!). Click Update and Proceed.
  15. Now you’re presented with another Browse button, this time pick the .p7b file, click Import.
  16. You should be told Successfully imported. <certfile>.p7b. Click the Activate the new certificate. button.
  17. Then you’ll be told Click the restart button to activate the new certificate. If the new certificate is not active after restart, click the help button for steps to restore the previous certificate. So click the Restart to Activate New Certificate button. The OMSA web server will restart. Click OK to the pop-up, then close the browser tab or click the Quit browser button.
  18. Give it a few seconds then re-visit the OMSA site, and you should find there are now no certificate errors present.
Posted in Applications, Hardware

HTC One M8 Dot View Cover Review

I’d read that the HTC Dot View case, code HC M100, could be annoying, but I needed something to give my phone some protection as I bought the phone outright and need it to last as long as possible.

Openings

The phone fits snugly into the case, and has yet to work its way out. The case not only protects the front and back of the phone, but also wraps around the sides at the corners, which would give it some protection if dropped. Aside from the corners (roughly 1cm of the side edge covered), the rest of the top, right-hand side, and bottom are left open, so you can still easily get to the power button, the IR blaster isn’t covered, and the volume control is still accessible, as well as the micro USB and headphone sockets. The left-hand side of the phone is completely covered due to the hinge, but there’s nothing you need along that edge anyway. There are rear openings for both cameras, the flash/torch and the rear noise-cancelling microphone. The front has an opening for the front camera and also the light sensor. In theory the notification LED will show through one of the top speaker holes, but the LED is so tiny that it basically never lines up with a hole well enough for this to work. This isn’t a problem for me as I don’t rely on the LED anyway.

Front Cover

The front of the case contains a magnet that both allows the phone to know to go into Dot View mode and also to automatically wake and sleep as the case is opened and closed. It’s worth noting that the Dot View front cover is for the most part not completely perforated – the “dots” are recessed but there’s a clear plastic screen in the middle of the cover. The only holes that are perforated all the way through are those for the top and bottom loudspeakers.

Materials

The case itself is made from two materials. The rear cover is made from black flexible plastic with a soft-touch matte finish. The front cover is made from matte-finish black rubber; this makes it grippy, so it tends to pull the pocket lining out when I take the phone out of my pocket, and it collects a certain amount of dust and fluff. That said, even if the outside gets a bit messy it does seem to be helping to keep the pocket fluff out of the loudspeaker holes, which were previously getting blocked up all the time.

Functionality

When the case is closed, if you press the power button the Dot View display initially shows you the time, temperature and a weather symbol, then changes to show a list of the last three numbers to call or be called. You can select these by swiping left or right through the case. Other “case closed” actions include being able to answer or reject calls by swiping – due to the openings in the case you can make a call with the cover closed, and if you dial with the cover open you can then close it without ending the call, so you don’t have to talk with the cover open. The swiping actions through the case cover seem to be a bit hit and miss: frequently the actions don’t register, which is annoying if you’re trying to answer an incoming call. So I now just open the case, answer the call with the green symbol as normal, then close the case and start talking.

On Valentine’s day this year the dot view background, normally just black, changed to some pink hearts, which was a nice touch. Going into the HTC Dot View app allows you to change the background theme to one of several included, or convert your own photo to a Dot View background. The app also allows you to customise some other features such as what notifications to show, the Dot View timeout, and whether to bypass security to show the call history even if the phone is locked.

Using the phone for apps, text, email etc. is fine: the front cover wraps around to the back and whilst it’s fairly rigid so doesn’t easily sit flat against the curved back of the phone, it still allows me to use the phone one-handed if I need to. The hinge is basically just the same rubber stuff that the front cover is coated in, but it’s moulded such that the hinge pulls the cover closed anytime you’re not actively holding it open. This would be handy if you dropped the phone.

Summary

To sum up, the Dot View cover is pretty good, but not perfect. The rubber effect on the front makes it hard to get the phone out of my pocket, but it is easy to use, and the Dot View concept is quite fun. The protection it gives to the phone’s screen, rear and corners looks to be good, and because the cover stays on all the time, if you drop the phone whilst using it, it’ll still be protected, unlike with my previous “pouch”-type cover where you had to take the phone out all the time to use it.

Posted in Hardware

Network Optimisation for Office 365 and other external or cloud services

These are some notes from a short video I just watched. Scanning through these notes will take less time than watching the whole 13-minute video, plus I’ve added links and more info.

Latency

PsPing is one of the SysInternals tools written by Mark Russinovich. It has a nifty feature that allows you to test latency using TCP rather than ICMP, which is what the regular ping command uses. This is beneficial because you frequently can’t ping out of corporate networks, and some external services block inbound ICMP too. Additionally, network devices often give ICMP traffic low priority, which can skew results. Usage: psping <site>:<port> e.g.
C:\sysint>psping outlook.office365.com:80
PsPing v2.01 - PsPing - ping, latency, bandwidth measurement utility
Copyright (C) 2012-2014 Mark Russinovich
Sysinternals - http://www.sysinternals.com


TCP connect to 132.245.226.34:80:
5 iterations (warmup 1) connecting test:
Connecting to 132.245.226.34:80 (warmup): 35.01ms
Connecting to 132.245.226.34:80: 39.88ms
Connecting to 132.245.226.34:80: 38.83ms
Connecting to 132.245.226.34:80: 39.75ms
Connecting to 132.245.226.34:80: 46.42ms

TCP connect statistics for 132.245.226.34:80:
Sent = 4, Received = 4, Lost = 0 (0% loss),
Minimum = 38.83ms, Maximum = 46.42ms, Average = 41.22ms

Low latency is important as it leads to a much snappier experience. If the latency is high, you can try using PsPing to test various parts of your network that the HTTP traffic is flowing through, e.g. proxy servers.
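If you don’t have PsPing to hand, the same TCP connect timing idea can be sketched in a few lines of Python (illustrative only – it times a single three-way handshake, without PsPing’s warmup handling or statistics):

```python
import socket
import time

def tcp_connect_latency_ms(host, port, timeout=5.0):
    # Time how long a single TCP connection takes to establish
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connected; close straight away
    return (time.perf_counter() - start) * 1000.0
```

Run it several times and discard the first result as a warmup, which is what PsPing does.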

TCP Window Scaling

This can significantly improve the transfer speed when dealing with large amounts of data (e.g. uploading/downloading files, attaching files to web-based email etc.). TCP Window Scaling allows more data to be sent before an acknowledgement is required from the other end of the connection. This can help a lot on connections with high latency, as the default TCP window size is only 64KB (65,535 bytes), whereas the maximum it can increase to with TCP Window Scaling is 1GB. TCP Window Scaling is defined in RFC1323.
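To see why the window size matters on high-latency links: a single TCP connection can send at most one window of data per round trip, so the window divided by the round-trip time caps the throughput. A quick back-of-the-envelope calculation (Python, illustrative):

```python
def window_limited_throughput_mbps(window_bytes, rtt_ms):
    # One window of data per round trip, converted to megabits per second
    return (window_bytes * 8) / (rtt_ms / 1000.0) / 1_000_000

# A 65,535-byte window over a 50ms path caps out at roughly 10Mbps,
# however fast the underlying link is
print(round(window_limited_throughput_mbps(65535, 50), 1))  # 10.5
```

With window scaling enabled the window can grow well beyond 64KB, lifting that cap.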

This one is a little more fiddly to test for. You need to use packet capture software, such as Wireshark.

  1. Install and load Wireshark, then go to the Capture menu and choose Options. In the Capture Options dialogue box, type the following into the Filter text box:
    host <some ip address> e.g. host 192.168.1.10
    This helps to reduce the amount of traffic that Wireshark will capture down to just packets that are sent to or from the IP address you specified.
  2. Run the capture, do some stuff that involves network communication with the host you specified, then stop the capture.
  3. To check for TCP Window Scaling, in the main Wireshark window, type into the Filter box: tcp.window_size_scalefactor!=-1
    This will filter out all packets where there is no TCP Window Scaling, so if all the captured packets (in the top pane of the Wireshark window) disappear at this point, TCP Window Scaling probably isn’t working between your PC and the host you tested against. If it is working, you’ll still have some packets left showing.
  4. To verify, select one of the packets in the top pane, and expand the Transmission Control Protocol line in the middle pane. You should see a few lines saying something like:
    Window size value: 59
    [Calculated window size: 15104]
    [Window size scaling factor: 256]
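Those three lines are related by simple multiplication: the raw 16-bit window field is multiplied by the negotiated scaling factor. As a sanity check (Python, illustrative):

```python
def calculated_window_size(window_size_value, scaling_factor):
    # Wireshark's "Calculated window size" = raw 16-bit field * scale factor
    return window_size_value * scaling_factor

print(calculated_window_size(59, 256))  # 15104
```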

From Windows Vista onwards, the TCP/IP stack implements Receive Window Auto-Tuning, which allows the TCP Window Scaling value to change dynamically.

Note that on some Windows versions, if your network location is set to “Public”, Windows might be restricting the upper limit of TCP Window Scaling. You can check by using the command:
netsh interface tcp show heuristics
TCP Window Scaling heuristics Parameters
----------------------------------------
Window Scaling heuristics         : disabled
Qualifying Destination Threshold  : 3
Profile type unknown              : normal
Profile type public               : normal
Profile type private              : normal
Profile type domain               : normal

The above is from Windows 8.1 with default settings, and it’s all looking good – heuristics are disabled and all the profile types are set to normal. The profile to check is the one that matches your current network location. You can find your current network location via:
netsh advfirewall monitor show currentprofile
Private Profile:
----------------------------------------------------------------------
RCMTechWiFi
Ok.

If you want to disable the heuristics (if it’s enabled, and your network location setting is showing as “restricted” or “highlyrestricted”) use the command:
netsh interface tcp set heuristics disabled

Having said all of the above, the people most likely to notice a difference are those with both higher latencies and higher bandwidth (e.g. 50+ms and 100+Mbps).

DNS

Cloud providers often give different DNS responses based on where you’re doing the lookup from. For example, I’m based in the UK and get the following:

nslookup outlook.office365.com
Server: router.asus.com
Address: 192.168.1.1

Non-authoritative answer:
Name: outlook-emeawest.office365.com
etc.

Note how it’s given me the answer as being EMEA (Europe, Middle East & Africa), which is correct for where I am. If you were in e.g. the US, Japan, etc. you should get a different response. The key here is that you want to make sure that your clients are connecting to the correct place for where they’re located. If you’re physically located in Japan, but due to some internal company network or VPN end up getting your cloud application via a boundary internet connection in Europe, all your data is going to be doing a big round trip from Japan to Europe and back, rather than just talking to the lower latency servers in a more local datacentre.
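You can do the same kind of lookup from Python’s standard library if nslookup isn’t convenient (a sketch – the canonical name returned for a cloud service will vary depending on where you resolve it from):

```python
import socket

def canonical_name(hostname):
    # Follow any CNAMEs and return the canonical host name
    name, aliases, addresses = socket.gethostbyname_ex(hostname)
    return name

# e.g. canonical_name("outlook.office365.com") resolved from the UK
# might come back as something like "outlook-emeawest.office365.com"
```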

MTU

This operates at a different OSI layer to TCP Window Scaling. The MTU is the Maximum Transmission Unit, and tends to default to 1500 bytes. It consists of the TCP and IP headers (20 bytes each) plus the data payload. It is viewed via the command:

netsh int ip show int

Idx  Met         MTU  State         Name
---  ---  ----------  ------------  ----------------------------
  1   50  4294967295  connected     Loopback Pseudo-Interface 1
  3   25        1500  connected     WiFi
  6   40        1500  disconnected  Bluetooth Network Connection
  7    5        1500  disconnected  Local Area Connection* 3

I’m using the WiFi interface at the moment, so my MTU is 1500. However, the MTU that my PPPoE ADSL modem can cope with is 1492. So what happens? If Windows starts to communicate with a remote server, a thing called Path MTU Discovery happens, where the two machines try to determine the largest MTU that they can use without the packet being fragmented by a piece of network kit along the way.

Fragmentation leads to reduced throughput due to the “wasted” space taken up by the TCP+IP headers, plus potentially some processing time. If the two machines at either end of the link can do 1500 but something in the middle can’t go that high (e.g. my ADSL modem), the device with the limiting MTU should send an ICMP message reporting its MTU. This happens until everything has been reduced to a level at which fragmentation is avoided. If this gets too small, the protocol overhead from the 40-byte header can become significant. There’s some extra detail on this plus calculations here.
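The overhead arithmetic is simple enough to sketch (Python, illustrative): usable payload per packet, and the header overhead as a percentage, for a few MTU values:

```python
def payload_and_overhead(mtu, headers=40):
    # Assumes 20-byte IP + 20-byte TCP headers with no options
    payload = mtu - headers
    return payload, 100.0 * headers / mtu

for mtu in (1500, 1492, 576):
    payload, pct = payload_and_overhead(mtu)
    print(f"MTU {mtu}: {payload} byte payload, {pct:.1f}% overhead")
```

At 1500 bytes the 40-byte header costs under 3% of each packet, but as the MTU shrinks the overhead climbs towards 7% and beyond.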

However, as I’ve discovered, some network devices don’t work properly and never send the ICMP message (or the message is lost along the way). This leads to what is known as a Black Hole, whereby the “too large” packet is dropped, but the machines at either end of the link aren’t told. The effect to the end user or application is data just not getting through, but often only sometimes. In my case, this exhibited itself by web browsing and most SMTP email being fine, but emails sent from certain hosts were lost, the sender receiving a “bounce” message saying that the connection to my mail server had timed out.

Knowing that the ADSL modem was probably to blame, and knowing that its MTU is 1492, I changed my MTU to 1492 as well and the problems went away. The command to change the MTU is:
netsh interface ipv4 set subinterface "Local Area Connection" mtu=1492 store=persistent

There’s also the option to use Jumbo Frames, which increase the MTU to as high as 9000 bytes. Again though, you need to make sure that both machines plus all parts of the network support this, or you’ll be wasting your time.

Posted in Networking, Performance, Windows

PowerShell: Find MTU

I had some issues caused by MTU recently, and decided to write a script to test for the Maximum Transmission Unit that a network or host could cope with.

I originally started off by using the PowerShell Test-Connection cmdlet, but then realised that it doesn’t allow you to set the DF bit (do not fragment) which is required for MTU testing, so I switched it out for the ping command.

The script works by trying buffer sizes that are calculated as the halfway point between a minimum and maximum value. These two points converge as the preceding test either passes or fails, which makes for a fairly swift determination of the MTU.

#set BufferSizeMax to the largest MTU you want to try, usually 1500 or up to 9000 if using Jumbo Frames
$BufferSizeMax = 1500
#set BufferSizeMin to the lower bound for the search
$BufferSizeMin = 0
#set TestAddress to the name or IP address you wish to test against
$TestAddress   = "www.bbc.co.uk"

$LastMinBuffer=$BufferSizeMin
$LastMaxBuffer=$BufferSizeMax
$MaxFound=$false

#calculate first MTU attempt, halfway between BufferSizeMin and BufferSizeMax
[int]$BufferSize = ($BufferSizeMax - $BufferSizeMin) / 2
while($MaxFound -eq $false){
    try{
        $Response = ping $TestAddress -n 1 -f -l $BufferSize
        #if MTU is too big, ping will return: Packet needs to be fragmented but DF set.
        if($Response -like "*fragmented*"){throw}
        if($LastMinBuffer -eq $BufferSize){
            #test values have converged onto the highest working MTU, stop here and report value
            $MaxFound = $true
            Write-Host "found."
            break
        } else {
            #it worked at this size, make buffer bigger
            Write-Host "$BufferSize" -ForegroundColor Green -NoNewline
            $LastMinBuffer = $BufferSize
            $BufferSize = $BufferSize + (($LastMaxBuffer - $LastMinBuffer) / 2)
        }
    } catch {
        #it didn't work at this size, make buffer smaller
        Write-Host "$BufferSize" -ForegroundColor Red -NoNewline
        $LastMaxBuffer = $BufferSize
        #if we're getting close, just subtract 1
        if(($LastMaxBuffer - $LastMinBuffer) -le 3){
            $BufferSize = $BufferSize - 1
        } else {
            $BufferSize = $LastMinBuffer + (($LastMaxBuffer - $LastMinBuffer) / 2)
        }
    }
    Write-Host "," -ForegroundColor Gray -NoNewline
}
#note: $BufferSize is the ICMP payload size; the path MTU includes a further 28 bytes of IP and ICMP headers
Write-Host "MTU: $($BufferSize + 28)"

Posted in Networking, PowerShell, Scripting

SBS 2008: Emails not received or bouncing

Small Business Server 2008 contains (amongst other things) Exchange 2007. This means that email handling is done on-premises rather than by an SMTP/IMAP/POP3/Exchange server hosted out on the internet somewhere.

This has good points and bad points. The good is that you have a lot of functionality and control, the bad is that it makes your email susceptible to issues that you wouldn’t get if your mail server wasn’t on the end of a (for most small businesses) broadband connection.

I recently encountered, and resolved, just one of these types of issue. Most emails were being received with no issues at all. However, emails from one contact would always bounce, i.e. the external sender would get a bounce message back similar to:

From: Mail Delivery Subsystem <MAILER-DAEMON@c2bthomr15.btconnect.com>
Subject: Warning: could not send message for past 4 hours
Date: 26 January 2015 17:03:29 GMT
To: externaluser@someplace.co.uk
********************************************
** THIS IS A WARNING MESSAGE ONLY **
** YOU DO NOT NEED TO RESEND YOUR MESSAGE **
********************************************

The original message was received at Mon, 26 Jan 2015 12:35:02 GMT
from c2bthomr15.ncs.ibs-infra.bt.com [10.87.69.228]

----- The following addresses had transient delivery errors -----
<internaluser@rcmtech.co.uk>
Reporting-MTA: dns; c2bthomr15.btconnect.com
Arrival-Date: Mon, 26 Jan 2015 12:35:02 GMT
Final-Recipient: RFC822; <internaluser@rcmtech.co.uk>
Action: delayed
Status: 4.3.0
Remote-MTA: DNS; mail.rcmtech.co.uk
Diagnostic-Code: SMTP; 451 4.7.0 Timeout waiting for client input
Last-Attempt-Date: Mon, 26 Jan 2015 17:03:29 GMT
Will-Retry-Until: Tue, 27 Jan 2015 12:35:02 GMT

Another email sent automatically, daily, from an online service (rightmove.co.uk) also didn’t appear. Due to the sender of these emails being automated, there was no bounce message to look at, and the online service’s tech support were less than helpful.

The interesting thing about the bounce message above is that the sending MTA (Message Transfer Agent) is reporting a timeout from the SBS. Yet we know that the vast majority of other email is getting through fine. So what would cause this particular MTA to consistently fail, when others are consistently working? We also know that this same MTA that consistently fails when trying to talk to the SBS, must work perfectly well when talking to other SMTP servers.

The cause was down to MTU (Maximum Transmission Unit) size. The SBS was set to use the default MTU of 1500 bytes, which should be fine. I verified this by running the command:

netsh int ip show int
Idx  Met         MTU  State      Name
---  ---  ----------  ---------  ---------------------------
  1   50  4294967295  connected  Loopback Pseudo-Interface 1
 10   20        1500  connected  Local Area Connection

Then I checked the MTU size setting on the ADSL modem/router:
(Screenshot: WNR2000v3 MTU setting)
Note how it is set to 1492, which is smaller than the 1500 set on the server.

For info, this is apparently because of the PPPoE protocol being used. The following is from the Netgear WNR2000v3 manual:

MTU – Application
1500 – The largest Ethernet packet size and the default value. This is the typical setting for non-PPPoE, non-VPN connections, and is the default value for NETGEAR routers, adapters, and switches.
1492 – Used in PPPoE environments

This ADSL connection is running PPPoE, hence the MTU is set to 1492.

This shouldn’t be a problem though. When the MTA and SBS start to communicate, they should negotiate the MTU size and adjust it if larger packets get lost. The problem is that if a network device along the path can only cope with smaller packets than either of the two servers, it has to send a message back to the servers to tell them to reduce the packet size. As I have discovered, various ADSL routers evidently don’t do this, or some firewalls block the “reduce your MTU size” ICMP packets, and thus you get a black hole where the packets are just silently dropped. This explains why the MTA was reporting a timeout.
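You can probe for this kind of black hole manually with ping and the don’t-fragment flag. The payload to pass to ping -l is the candidate MTU minus 28 bytes of IP and ICMP headers; a tiny helper (Python, illustrative) does the arithmetic:

```python
def max_ping_payload(mtu, ip_header=20, icmp_header=8):
    # Largest "ping -l" payload that fits in one unfragmented packet
    return mtu - ip_header - icmp_header

# To test a 1492-byte path MTU on Windows: ping -f -l 1464 <host>
print(max_ping_payload(1492))  # 1464
```

If the ping at that size succeeds but one byte more reports that the packet needs to be fragmented (or silently times out), you’ve found the limiting MTU.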

The fix was to change the MTU on the SBS to be the same size as that on the ADSL router, namely 1492. This is done via the command:

netsh interface ipv4 set subinterface "Local Area Connection" mtu=1492 store=persistent

and reboot.

Following this, emails are now being received from both the sources that had been failing.

Posted in Exchange, Networking, Windows

Windows 10 free upgrade – not for Enterprise or RT editions

I’ve been looking for details on the free upgrade from Windows 7 and 8.1 to Windows 10 that was announced a little while ago. I can’t start making plans to upgrade people without knowing if it’ll actually be possible or not.

Today I found some small print that confirms that the free upgrade will not be available to the Enterprise editions of Windows, or RT editions. This isn’t entirely unexpected, and most home and smaller business users won’t be running Enterprise editions anyway, as until recently they were only available via a volume license agreement.

The RT thing is going to be a bit annoying if you have one of those devices, but I’ve always categorised them as a “fixed” OS device – if you want more than security updates and new apps you need to fork out for a new device, or perhaps as with Android phones, wait for the device manufacturer to release their customised version of the new OS rather than expecting to get it direct from Google.

Posted in Windows