Swift - Integer Types
Summary
This article goes through the usage of various integer types in Swift (including Int and UInt).
 
Table of Contents

  • Int
  • Max and Min Int Values
  • Integers of different sizes
  • Unsigned Integers
  • Formatting Large Numbers in the Code

 
Article Series
This article is part of the series: Swift - Introduction and Basics

The need for integers is common in almost any program: they appear in loop counters, instance variables, and constants in countless scenarios. In Swift, the most commonly used integer type is Int.

Int

In Swift, the type Int represents both positive and negative whole numbers. This includes positive integers (1, 2, 3, …), negative integers (-1, -2, -3, …), and, of course, 0. The name of the type is Int (in Swift, type names start with a capital letter, unlike in C, where the integer type is written int, with a lowercase i).

var score: Int?

score = 0
score = 100
score = -100
// Int can have 0, positive, and negative values

let maxAttempts = 10
// By using Type Inference, Swift will assign the type Int to maxAttempts

Max and Min Int Values

On a 32-bit system, Int is a 32-bit number. A 32-bit number can represent 2^32 (about 4.3 billion) distinct values. Since Int represents both negative and positive integers, the values range from -2,147,483,648 to 2,147,483,647 (that is, -2^31 to 2^31 - 1).

On a 64-bit system, Int is a 64-bit number. It can represent 2^64 distinct values, half of them negative and the other half zero or positive, so the range is -2^63 to 2^63 - 1. You can find out the limits of the values that an Int can hold by using the max and min static properties on the Int type.

print(Int.max)
// On a 64-bit architecture (arm64 - iPhone 5s, iPhone 6, etc.),
// prints: 9223372036854775807
// On a 32-bit architecture (armv7 - iPhone 4s, iPhone 5, etc.),
// prints: 2147483647 (that is, 2^31 - 1)

print(Int.min)
// On a 64-bit architecture, prints: -9223372036854775808
// On a 32-bit architecture, prints: -2147483648

Older iPhones have 32-bit architectures. In the Build Settings section in Xcode, you specify which architectures your app supports; you would typically choose armv7 and arm64. The architectures armv7 and armv7s are 32-bit and are used on the iPhone 4s, iPhone 5, and similar devices. The 64-bit architecture, arm64, is used on the iPhone 5s, iPhone 6, and newer iPhones.
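If you want to check which width of Int your code is actually running with, you can print its size. A minimal sketch, assuming Swift 3 or later, where MemoryLayout is available (earlier Swift versions used sizeof()):

// Reports how many bits an Int occupies on the current architecture
print("Int is \(MemoryLayout<Int>.size * 8) bits")
// On a 64-bit device (arm64), prints: Int is 64 bits
// On a 32-bit device (armv7), prints: Int is 32 bits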

Integers of different sizes

As we discussed above, Int can be either a 32-bit number or a 64-bit number depending on the architecture of the system. However, there are types for representing integers of various sizes:

  • Int8
  • Int16
  • Int32
  • Int64

As the names suggest, Int8 is an 8-bit number. It holds values from -128 to 127 (2^8 is 256 distinct values). Similarly, Int16 is a 16-bit number; it holds values from -32768 to 32767. On a 32-bit system, Int32 is the same size as Int, and on a 64-bit system, Int64 is the same size as Int.

On all these types, you can use min and max to get the minimum and maximum values.

print(Int8.max)   // prints: 127
print(Int8.min)   // prints: -128

print(Int16.max)  // prints: 32767
print(Int16.min)  // prints: -32768

Generally speaking, you want to use Int in your application code, even if the numbers are small or only ever positive. This keeps the code easy to read and portable across different architectures.

However, if you do use the types that hold smaller numbers, make sure you take overflow errors into account. Say you defined a level as an Int8 and kept incrementing it. To prevent a crash, check whether it has reached the maximum before incrementing.

var level: Int8 = 127
if level < Int8.max {
  level += 1
} else {
  print("Max Level reached")
}
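If you would rather detect the overflow at the point of the arithmetic instead of hand-writing the bounds check, the standard library can report it for you. A minimal sketch, assuming Swift 4 or later, where addingReportingOverflow(_:) is available on the fixed-width integer types:

var level: Int8 = 127
let (next, didOverflow) = level.addingReportingOverflow(1)
if didOverflow {
  print("Max Level reached")
} else {
  level = next
}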

If you try to assign a number larger than a type can hold, you get an overflow error at compile time.

let score: Int8 = 265

// Error: integer literal '265' overflows when stored into 'Int8'

Unsigned Integers

Similar to the Int type, there are unsigned integer types as well. These types hold only non-negative integers (zero and up); they do not hold negative integers. These types are named:

  • UInt
  • UInt8
  • UInt16
  • UInt32
  • UInt64

Just like with the Int type, UInt is the same size as UInt32 on 32-bit architectures and the same size as UInt64 on 64-bit architectures. So UInt on a 32-bit system goes from 0 to 4,294,967,295 (2^32 - 1). Unlike Int, it cannot hold negative values.

print(UInt.max)
print(UInt.min)
// max: 18446744073709551615 (2^64 - 1, on a 64-bit architecture)
// min: 0

print(UInt8.max)
print(UInt8.min)
// max: 255
// min: 0

The UInt types also have min and max static properties. You can see that UInt8 goes from 0 to 255.
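And because these types are unsigned, trying to store a negative value in them is rejected at compile time. A minimal sketch (the constant name is just for illustration):

let attempts: UInt = -1
// Compile-time error: a negative literal cannot be stored in an unsigned integer type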

Similar to the suggestion above, in typical application programming you wouldn't use the UInt types. You would use Int, even if the variable or constant you define will never hold negative values.

If you assign an integer literal when declaring a variable or constant, Swift makes it an Int via type inference. It will be Int even if the number is very small and positive.

let score = 100
// The type of score will be Int, even though Int8 or UInt8 would have sufficed

let score2 = Int16.min
let score3 = UInt8.max
// Here the type of score2 will be Int16
// Type of score3 will be UInt8

If you use the min/max static properties of a type to assign a value during declaration/initialization, the constant or variable is inferred to have that type (as you can see in the code above). This is because min and max are declared with the type they belong to.

For example, the min variable on Int16 is declared like this:

static var min: Int16 { get }
// Since min is of type Int16, a constant or variable initialized with it is inferred to be Int16 as well.

let score4: Int = Int16.min
// Error: this will not compile. Even though Int can hold every value an Int16 can,
// Swift performs no automatic type conversion. Int16.min returns a value of type Int16,
// and that cannot be assigned to score4, which is of type Int.
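To make an assignment like that compile, convert explicitly by constructing an Int from the Int16 value. A small sketch (the constant name is just for illustration):

let score5 = Int(Int16.min)
// Int(...) builds a new Int from the Int16 value, so score5 is an Int holding -32768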

Formatting Large Numbers in the Code

Let's say you have declared Int constants and assigned them large values. Large values are difficult to read: it's easy to confuse one hundred thousand with one million or ten million, and larger numbers are even harder to decipher.

For readability, you can add _ (underscore) as a separator between the digits in the number.

let maxPossibleScore = 1_000_000

print(maxPossibleScore)
// Prints: 1000000
// So, the underscores are only for programmer readability

As you can see above, the _ is only for programmer readability. The compiler treats the literal as if the underscores were not there (it is not a String), so when you print the value, it prints without underscores.
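Underscores also make it easy to tell apart, at a glance, the quantities mentioned earlier. A small sketch (the constant names are just for illustration):

let oneHundredThousand = 100_000
let oneMillion = 1_000_000
let tenMillion = 10_000_000
// Grouping by thousands makes the magnitude of each constant obvious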
