This last one, the data transfer rate, is currently a bottleneck for applications running on mobile phones and other devices. When using a cellular data connection, every megabyte is metered by the network provider, and the user is often charged per megabyte transferred.
This has implications for developers making mobile apps that connect to the internet: if every piece of data transferred has a cost, then it is important to minimize the amount of data transferred. This, I argue, is a major reason why 'webapps' (where everything is provided by an external server) are not ideal. Local apps store some data on the mobile device, incurring a one-off transfer cost, whereas webapps must transfer all data every time they are used. This 'data' includes the design of the interface and the logic of the program, as well as the content of the app. The interface and logic data don't change very often; it is the content that changes. Local apps keep a copy of the unchanging data on the device and only update the changing content, reducing network usage.
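The caching pattern described above can be sketched in a few lines. This is a minimal illustration, not a real networking API: the server here is an in-memory stand-in, and all class, method, and resource names are hypothetical. The idea is simply that the client sends the version it already has, and the server only ships a full payload when the content has actually changed.

```python
class FakeServer:
    """Stands in for the remote service; tracks how many bytes it sends."""
    def __init__(self):
        self.resources = {}   # path -> (version, data)
        self.bytes_sent = 0

    def fetch(self, path, cached_version=None):
        version, data = self.resources[path]
        if cached_version == version:
            return version, None          # "not modified": no payload sent
        self.bytes_sent += len(data)      # full transfer
        return version, data


class LocalApp:
    """Keeps a device-side cache; pays the transfer cost once per change."""
    def __init__(self, server):
        self.server = server
        self.cache = {}  # path -> (version, data)

    def get(self, path):
        cached = self.cache.get(path)
        version, data = self.server.fetch(
            path, cached[0] if cached else None)
        if data is None:                  # unchanged: reuse the local copy
            return cached[1]
        self.cache[path] = (version, data)
        return data


server = FakeServer()
server.resources["ui/layout"] = ("v1", b"x" * 1000)   # rarely changes
server.resources["feed/news"] = ("v1", b"y" * 100)    # changes often

app = LocalApp(server)
app.get("ui/layout")   # first use: full transfer (1000 bytes)
app.get("feed/news")   # full transfer (100 bytes)
app.get("ui/layout")   # cached and unchanged: no payload transferred
server.resources["feed/news"] = ("v2", b"z" * 100)    # content updated
app.get("feed/news")   # only the changed content is re-fetched (100 bytes)
```

A pure webapp, by contrast, would pay for the interface data on every visit. Real HTTP achieves the same effect with validators such as ETag and `If-None-Match`, where an unchanged resource yields a small `304 Not Modified` response instead of the full body.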
If there were no cost for data transfer (and here I include the cost of waiting for slow transfers), the data that describes the app's interface and logic could be transferred every time the app is used. Minimizing transferred data is currently an important consideration, but perhaps in the future it won't be.
I imagine that this parallels other bottlenecks of the past, such as the need to minimize data in memory on early computers, where every byte mattered. At that time, productive programmers had to be aware of exactly how memory was managed and used by their programs, and the good ones knew many tricks for minimizing it. Eventually, though, these tricks became unnecessary as memory became effectively free (relative to other bottlenecks) and programmers could spend their time focusing on other problems.