Saving Data in iOS

May 31 2022 · Swift 5.5, iOS 15, Xcode 13

Part 1: Files & Data

05. Data & Data Types

Notes: 05. Data & Data Types

Apple’s documentation on Numbers, Data, and Basic Values

This course was originally recorded in April 2020. It has been reviewed and all content and materials updated as of November 2021.

In this video you'll learn more about Swift data types, how to view the type of a variable or object you have, and how each data type uses a different number of bytes in memory or on disk.

Talking about data can get a little confusing, because the term is overloaded. In Swift, you use the Foundation type named Data, but the bytes it holds can themselves be considered data in general computer science terms. In modern computer architectures, the smallest unit of data you can address is a byte, which comprises eight bits.

Swift has different types depending on the amount and kind of data you need. This video's starter playground has some variables, grouped in sections, to give you examples of the different data types available to you in Swift. There are integers for whole numbers, floats and doubles for decimal numbers, and an array of one of the available integer types to represent bytes.

Integers represent whole numbers, and they can be signed or unsigned, meaning they can hold both positive and negative numbers, or positive-only numbers from zero upwards. Looking at the integer section of your playground, under data types, you can see there are two constants, one for age and one for height. What differentiates them is that one is of type Int8, an 8-bit signed integer, while the other is of type UInt8, an 8-bit unsigned, positive-only integer. The UInt8 structure in Swift is also used to represent a byte, or eight bits. With eight bits, the range of values available per byte is two to the eighth power, which is 256. They start at zero and go to 255.

How can you tell the number of bytes a data type uses, and what its maximum and minimum values are? Fortunately, Swift has you covered. You can use the MemoryLayout enumeration to find out the number of bytes a data type uses on your specific hardware or device. Its size(ofValue:) method lets you pass in a variable or a constant to get this.
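The course playground itself isn't shown here, so as a hedged sketch: the age and height constants described above, and the MemoryLayout call on one of them, might look like this (the specific values 38 and 72 are illustrative, not taken from the course materials):

```swift
import Foundation

// Two 8-bit integer constants, as described in the transcript.
// `age` is unsigned (0...255); `height` is signed (-128...127).
let age: UInt8 = 38
let height: Int8 = 72

// MemoryLayout.size(ofValue:) reports the number of bytes the
// value's type occupies on this hardware.
print(MemoryLayout.size(ofValue: age))  // 1
```

Note that size(ofValue:) infers the type from the value you pass in, so you don't need to spell out UInt8 anywhere in the call.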
The output in your sidebar should be one, which is the correct number of bytes for an 8-bit unsigned integer. Alternatively, you can also get the number of bytes for a type itself, rather than for a variable or constant. You should see a one in your sidebar, as they're the same underlying data type.

Integer structures also have handy properties that let you get the minimum and maximum values you can store in them. The output is right in line with having two to the power of eight values available to you. Given that it's an unsigned integer, meaning you can't store a negative number, the values you can store go from zero to 255.

But what about height? It will certainly differ somehow, since it's a signed integer. The number of bytes it uses is still one, which is correct given that you're not using more bytes to store your data, only allowing for both positive and negative numbers. What does change, however, is that your minimum value is -128 and your maximum is 127. That's still 256 possible values, except they now have to account for both positive and negative numbers.

In addition to an 8-bit integer, or a byte of data, you also have at your disposal Int, Int16, Int32 and Int64, along with their unsigned counterparts UInt, UInt16, UInt32 and UInt64. Of note: on 64-bit systems, the number of bytes that both Int and Int64 use is the same. It's eight bytes or, insert drum roll here, 64 bits.

Floating-point numbers, represented by the Float type in Swift for single-precision floating-point values, allow you to store numeric values with decimals in them. As with integers, they have a limit on both the positive and negative ends and, unlike integers, you don't have multiple variants of Float, just Float. You can see the number of bytes a Float uses, as well as the approximate minimum and maximum values it can hold. Floats use four bytes, and the values they can hold are large enough that you need exponents to represent them.
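The size and range checks described above can be sketched like this (a minimal sketch of the playground code, not the course's exact listing):

```swift
// Size of a type itself, rather than of a value - same result.
print(MemoryLayout<UInt8>.size)  // 1
print(MemoryLayout<Int8>.size)   // 1

// Minimum and maximum storable values for unsigned vs. signed bytes.
print(UInt8.min, UInt8.max)      // 0 255
print(Int8.min, Int8.max)        // -128 127

// On 64-bit systems, Int and Int64 use the same eight bytes.
print(MemoryLayout<Int>.size, MemoryLayout<Int64>.size)  // 8 8

// Float is single precision: four bytes, with approximate extremes
// large enough to need exponential notation.
print(MemoryLayout<Float>.size)  // 4
print(-Float.greatestFiniteMagnitude, Float.greatestFiniteMagnitude)
```

The min and max properties come from the FixedWidthInteger protocol, so they're available on every integer type, not just UInt8 and Int8.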
I said approximate values because the smaller or larger a floating-point value gets, the more precision you lose. The precision depends on the underlying format used to represent floating-point values. Most computers and devices use a standard format known as the IEEE floating-point format.

Which brings us to Double, or double-precision floating-point values. They too allow you to store decimal numbers, just much larger or smaller values, and thus with more precision. Doubles aren't really twice as precise as Floats; the name just derives from the fact that a double-precision number uses twice the bits of a regular single-precision number. You can check the number of bytes a Double uses just as you did for Float. As expected, Doubles use eight bytes of memory, and can hold larger and smaller values.

You won't normally need to type out the numerical values of data directly, but you should be familiar with what that looks like in order to better understand the mechanics of saving. By default, when you use an integer literal in Swift, it's in base-10, a decimal number. So you can directly use zero to 255. But you can also prefix your integer literals to change their base. If you use the 0b prefix, you're working with a base-two literal. The b is for binary, which only uses zero and one for digits. In decimal, you need three digits to express the full range of a byte, but in binary you need eight. So it's easier to read if you split the number up with an underscore every four bits.

A group of four bits is known as a nibble, and it's often represented using a base-16, also known as hexadecimal or just hex, digit. Every nibble holds a value between zero and 15, inclusive. Past the number nine, hex literals use the first six letters of the alphabet, and the prefix for hex is 0x. In your playground, you'll notice some examples of both positive and negative integers represented in binary or hexadecimal bases.
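The literal prefixes just described can be sketched like this (the specific byte value is illustrative, chosen so both nibbles are easy to read):

```swift
// The same byte written in decimal, binary and hexadecimal.
let decimalByte: UInt8 = 240
let binaryByte: UInt8 = 0b1111_0000  // underscore separates the two nibbles
let hexByte: UInt8 = 0xF0            // one hex digit per nibble: F = 15, 0 = 0

print(decimalByte == binaryByte, binaryByte == hexByte)  // true true

// Double is double precision: eight bytes, twice the size of a Float.
print(MemoryLayout<Double>.size)  // 8
```

Changing the base of a literal changes only how you write the number in source code; the stored byte is identical in all three cases.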
They use the same number of bytes; all you're changing is how you represent the numbers themselves, nothing more.

The favoriteBytes array contains some examples of regular decimal, binary and hexadecimal representations of integer numbers. Together, they make up bytes of data. And as with the other data types, you can see how much memory the array occupies. The difference is that you can't just rely on the size method in MemoryLayout because, as the documentation states, the result does not include any dynamically allocated or out-of-line storage. But try the following so you can still see how much total memory it will use. The result, as you might have guessed, is 16 bytes. That's an accurate number, considering you have an array of 16 UInt8 values, each using up one byte of memory.

As you've seen, Swift has many different types to help you represent your data, and together they open up a world of possibilities. Even though today's devices have quite a bit of RAM and drive storage, you should still try to use the appropriate data type depending on your needs. In the next video, you'll take your favorite bytes and store them as data in your playground's documents directory. See you there.
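A hedged sketch of that total-memory check, assuming a favoriteBytes array of 16 UInt8 values written in mixed bases (the actual contents of the course playground aren't shown here, so these byte values are placeholders):

```swift
// 16 bytes expressed in decimal, binary and hex - values are illustrative.
let favoriteBytes: [UInt8] = [
    240, 159, 152, 184,
    0b1111_0000, 0b1001_1111, 0b1001_1000, 0b1011_1001,
    0xF0, 0x9F, 0x98, 0xBB,
    240, 159, 152, 189
]

// size(ofValue:) measures only the array struct itself, not its
// out-of-line element storage, so compute the elements' footprint
// from the count and the stride of each element:
let totalBytes = favoriteBytes.count * MemoryLayout<UInt8>.stride
print(totalBytes)  // 16
```

For a single-byte type, stride and size are both one; stride is the safer choice here because it accounts for any padding between elements of larger types.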