
Rust Programming Language Tutorial (Basics), Part 3


Considering the growing popularity of the Rust programming language, we’ve decided to share our experience programming with Rust. Rust is a systems programming language that provides strong memory safety guarantees without sacrificing performance.

This article is the third part of our Rust programming language tutorial, and will be useful for anyone who wants to get familiar with Rust.

In our previous two articles, we looked at Rust features such as zero-cost abstractions, move semantics, and guaranteed memory safety.

This third part of our Rust programming tutorial looks closer at the features that help programmers eliminate data races in threads and reduce code duplication with trait-based generics.

Threads without Data Races

During early development of Rust, it was discovered that the borrow checker (responsible for general memory safety) is also capable of preventing data races between threads. In order for a data race to occur, three conditions must simultaneously hold:

  • two or more threads are concurrently accessing a memory location
  • at least one thread is writing there
  • at least one thread isn’t synchronized

The borrow checker ensures that at any point in time a memory location has either any number of read-only references or exactly one writable reference, thus preventing the first and second conditions from occurring together.
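
For illustration, here’s a minimal single-threaded sketch (not from the original article) of the rule the borrow checker enforces:

Rust
fn main() {
    let mut data = vec![1, 2, 3];
    let r = &data;        // any number of read-only references are allowed...
    // let w = &mut data; // ...but taking a mutable reference here would fail:
    // error[E0502]: cannot borrow `data` as mutable because it is also
    //               borrowed as immutable
    println!("{:?}", r);
}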

However, while references are the most common way to read and modify memory, Rust has other options for this, so the borrow checker isn’t a silver bullet. Furthermore, risks to thread safety aren’t limited to data races: Rust has no special magic to prevent race conditions in general, so deadlocks and incorrect use of synchronization primitives are still possible, and certain knowledge is still required to avoid them.
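
For instance, a sketch like the following (not from the original article) compiles cleanly but can deadlock at runtime when the two threads take the locks in opposite order:

Rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let a = Arc::new(Mutex::new(0));
    let b = Arc::new(Mutex::new(0));

    let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
    let worker = thread::spawn(move || {
        // This thread locks `a` first, then `b`...
        let _guard_a = a2.lock().unwrap();
        let _guard_b = b2.lock().unwrap();
    });

    // ...while the main thread locks `b` first, then `a`. If each thread
    // acquires its first lock before the other takes its second one,
    // neither can make progress: a deadlock the compiler cannot detect.
    let _guard_b = b.lock().unwrap();
    let _guard_a = a.lock().unwrap();

    worker.join().unwrap();
}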

The Rust compiler prevents the most common concurrency issues that less safe programming languages like C and C++ allow, and it does so without garbage collection or any background, under-the-hood threads and synchronization. The standard library includes tools for the safe use of several concurrency paradigms:

  • message passing
  • shared state
  • lock-free
  • purely functional


Passing Messages with Channels

Channels are used to transfer messages between threads. Ownership of messages sent over a channel is transferred to the receiver thread, so you can send pointers between threads without fear of a possible race condition occurring later. Rust’s channels enforce thread isolation.

Here’s an example of channel usage:

Rust
use std::thread;
use std::sync::mpsc::channel;
// First create a channel consisting of two parts: the sending half (tx)
// and the receiving half (rx). The sending half can be duplicated and
// shared among many threads.
let (tx, rx) = channel();
  
// Then spawn 10 threads, each with its own sending handle.
// Each thread will post a unique number into the channel.
for i in 0..10 {
    let tx = tx.clone();
    thread::spawn(move || {
        tx.send(i).unwrap();
    });
}
// Now in the main thread we’ll receive the expected ten numbers from our
// worker threads. The numbers may arrive in any arbitrary order, but all
// of them will be read safely.
for _ in 0..10 {
    let j = rx.recv().unwrap();
    assert!(0 <= j && j < 10);
}

An important thing here is that messages are actually moved into the channel, so it’s not possible to use them after sending. Only the receiver thread can re-acquire ownership over the message later.
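
To illustrate this, here’s a minimal sketch (not from the original article) where trying to use a value after sending it would fail at compile time:

Rust
use std::sync::mpsc::channel;
use std::thread;

fn main() {
    let (tx, rx) = channel();
    let message = String::from("hello");

    thread::spawn(move || {
        tx.send(message).unwrap();
        // Uncommenting the next line would not compile:
        // println!("{}", message);
        // error[E0382]: borrow of moved value: `message`
    });

    println!("{}", rx.recv().unwrap());
}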


Safe State Sharing with Locks

Another more traditional way to deal with concurrency is to use a passive shared state for communication between threads. However, this approach requires proper synchronization, which is notoriously hard to do correctly: it’s very easy to forget to acquire a lock or to mutate the wrong data while holding the correct lock. Many languages even go as far as removing support for this low-level concurrency style altogether in favor of higher-level approaches (like channels). Rust’s approach is dictated by the following thoughts:

  1. Shared state concurrency is an essential and totally valid programming style. It’s needed for system code to maximize performance and as a building block for high-level concurrency primitives.
  2. Most of the time, problems arise when a state is shared accidentally.

So Rust provides tools for using shared state concurrency in a safe but direct way.

Threads in Rust are naturally isolated from each other via ownership, which can be transferred only at safe points like during thread creation or via safe interfaces like channels. A thread can mutate data only when it has a mutable reference to the data. In single-threaded programs, safe usage of references is enforced statically by the borrow checker. Multi-threaded programs must use locks to provide the same mutual exclusion guarantees dynamically. Rust’s ownership and borrowing system allows locks to have an API that’s impossible to use unsafely. The principle “lock data, not code” is enforced in Rust.

The Mutex type in Rust is actually generic over the type T of the data it protects. When a Mutex is created, the data is moved into the mutex, giving up direct access to it:

Rust
use std::sync::Mutex;

// Let’s write a generic thread-safe stack using
// a synchronized vector as its backing storage:
struct ThreadSafeStack<T> {
    elements: Mutex<Vec<T>>,
}
  
impl<T> ThreadSafeStack<T> {
    fn new() -> ThreadSafeStack<T> {
        ThreadSafeStack {
            // The vector we created is moved into the mutex,
            // so we cannot access it directly anymore
            elements: Mutex::new(Vec::new()),
        }
    }
}

In order to get safe access to the data protected by the mutex, we need to use the lock() method, which blocks until the mutex is acquired. It returns a Result that, once unwrapped, yields a MutexGuard<T>: a RAII-style guard. The guard automatically releases the mutex when it’s destroyed, so there’s no unlock() method:

Rust
impl<T> ThreadSafeStack<T> {
    // Note that the push() method takes a non-mutable reference to “self,”
    // allowing multiple references to exist in different threads:
    fn push(&self, value: T) {
        let mut elements = self.elements.lock().unwrap();
        // After acquiring the lock, the mutex stays locked until the guard
        // is dropped when this method returns, preventing any race conditions.

        // The guard dereferences to the underlying Vec<T>, so you can
        // safely access the data and perform any actions you need:
        elements.push(value);
    }
}

The key idea here is that lifetimes of any references to data protected by a mutex are tied to the lifetime of the corresponding MutexGuard, and the lock is held for the whole lifetime of the MutexGuard. In this way, Rust enforces locking discipline: lock-protected data can only be accessed when holding the lock.

Consider a typical mistake when using mutexes: a dangling reference to protected data remains alive after the mutex is unlocked. For example, you may want to add the following method to ThreadSafeStack:

Rust
impl<T> ThreadSafeStack<T> {
    // Peek into the stack, returning a reference to the top element.
    // If the stack is empty then return None.
    fn peek(&self) -> Option<&T> {
        let elements = self.elements.lock().unwrap();
        // Now we can access the stack data.
  
        // Handle the case of empty stack.
        if elements.is_empty() {
            return None;
        }
  
        // And if we have at least one element then return a reference
        // to the last one in the vector:
        return Some(&elements[elements.len() - 1]);
    }
}

However, the Rust compiler won’t allow this. It sees that the reference you’re trying to return will be alive longer than the time for which the lock is held, and will tell you exactly what is wrong with the code:

ShellScript
error[E0597]: `elements` does not live long enough
  --> src/main.rs:45:22
   |
45 |         return Some(&elements[elements.len() - 1]);
   |                      ^^^^^^^^ does not live long enough
46 |     }
   |     - borrowed value only lives until here
   |

Indeed, if this were allowed, the reference you got could suddenly be invalidated when some other thread popped the top element off the stack. A possible race condition has been swiftly averted at compile time, saving you an hour or two of careful debugging of a weird behavior that occurs only on Tuesdays.
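
One possible way to make peek() compile (an alternative not shown above, assuming T: Clone) is to return a copy of the top element, so nothing outlives the MutexGuard:

Rust
impl<T: Clone> ThreadSafeStack<T> {
    // Return a copy of the top element (or None for an empty stack).
    // The copy is made while the lock is held; the lock is released when
    // the MutexGuard goes out of scope at the end of the method.
    fn peek(&self) -> Option<T> {
        let elements = self.elements.lock().unwrap();
        elements.last().cloned()
    }
}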


Trait-Based Generics

Generics are a way of generalizing types and functionality to broader cases. They’re extremely useful for reducing code duplication, but they can call for rather involved syntax. In particular, generics require great care in specifying exactly which types they are valid for. The simplest and most common use of generics is for type parameters.

Rust’s generics are similar to C++ templates in both syntax and usage. For example, the generic Rust data structure implementing a dynamically sized array is called Vec<T>. It’s a vector specialized at compile time to contain instances of any type T. Here’s how it’s defined in the standard library:

Rust
// The definition of a generic structure type looks like this
pub struct Vec<T> {
    buf: RawVec<T>, // the buffer representation
    len: usize,     // used size of the vector
}
  
pub struct RawVec<T, A: Alloc = Heap> {
    ptr: Unique<T>, // pointer to the actual buffer array of T
    cap: usize,     // allocated size of the buffer
    a: A,           // the allocator
}

A generic function max() that takes two arguments of any type T can be declared like this:

Rust
fn max<T>(a: T, b: T) -> T { /* … */ }

However, this definition is incorrect because we can’t in fact apply max() to any type. The maximum function is defined only for values which have some defined ordering between them (the one used by comparison operators like < and >=). This is where Rust generics are different from C++ templates.

In C++, requirements for template parameters are implicit:

C++
template <typename T>
T max(T a, T b)
{
    return a < b ? b : a;
}

This function will compile only when used with types that actually define operator<(), and will cause a compilation error otherwise:

ShellScript
test.cpp: In instantiation of ‘T max(T, T) [with T = Person]’:
test.cpp:19:46:   required from here
test.cpp:11:14: error: no match for ‘operator<’ (operand types are ‘Person’ and ‘Person’)
     return a < b ? b : a;
            ~~^~~

This error may be sufficient in simple cases like max(), where the type requirements and the source of the error are obvious, but with more complex functions that use more than one required method, you can quickly be overwhelmed by a flood of seemingly unrelated errors about obscure missing methods.

Rust makes generic type requirements explicit by using traits:

Rust
fn max<T: Ord>(a: T, b: T) -> T {
    if a < b { b } else { a }
}

Now max() can be used with any type T that implements the Ord trait. This trait defines the ordering essential for the implementation of max() and makes it possible to use the comparison operators. Explicit requirements enable the compiler to generate much more friendly error messages when a function is accidentally provided with arguments of an incorrect type:

ShellScript
error[E0277]: the trait bound `Person: std::cmp::Ord` is not satisfied
  --> src/main.rs:16:19
   |
16 |     let bigger_person = max(person1, person2);
   |                         ^^^ the trait `std::cmp::Ord` is not implemented for `Person`
   |
   = note: required by `max`
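
To actually satisfy the bound, the calling code could derive the required traits for Person. A hypothetical sketch:

Rust
// Deriving Ord (and the traits it builds on) gives Person a total ordering
// based on its fields, compared in declaration order: first age, then name.
#[derive(PartialEq, Eq, PartialOrd, Ord)]
struct Person {
    age: u32,
    name: String,
}

fn max<T: Ord>(a: T, b: T) -> T {
    if a < b { b } else { a }
}

fn main() {
    let person1 = Person { age: 25, name: String::from("Alice") };
    let person2 = Person { age: 32, name: String::from("Bob") };
    let bigger_person = max(person1, person2);
    println!("{} is the bigger person", bigger_person.name);
}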

Traits Define Type Interfaces

Traits are Rust’s way of defining interfaces. They describe what methods must be implemented by a type in order to satisfy the trait. For example, the Ord trait requires a single method cmp() that compares this value to another and returns the ordering between them:

Rust
pub trait Ord: Eq + PartialOrd<Self> {
    fn cmp(&self, other: &Self) -> Ordering;
}

Traits may build upon other traits. The Ord trait has two supertraits: PartialOrd (specifying a partial ordering) and Eq (specifying an equivalence relation). Thus, Ord may be implemented only for types that also implement both PartialOrd and Eq. Methods of parent traits are available to implementations of the child trait, so, for example, any Ord type can also be compared with the == operator that comes with the Eq requirement.
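
As an illustration (hypothetical trait and type names), a trait can name the standard Debug trait as its parent and rely on it in a provided method:

Rust
use std::fmt::Debug;

// Loggable can only be implemented for types that also implement Debug,
// so its provided method may freely use the Debug formatting machinery.
trait Loggable: Debug {
    fn log(&self) {
        println!("[log] {:?}", self);
    }
}

#[derive(Debug)]
struct Event {
    code: u32,
}

impl Loggable for Event {}

fn main() {
    Event { code: 404 }.log();
}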

Traits may also contain common implementations of methods provided to all concrete implementations of the trait. For example, the PartialOrd trait provides the implementation of the lt() method. This is the method actually called when the < comparison operator is used in code:

Rust
pub trait PartialOrd<Rhs: ?Sized = Self>: PartialEq<Rhs> {
    // Method defining ordering between this value and some other value,
    // possibly returning None if such ordering is not defined.
    fn partial_cmp(&self, other: &Rhs) -> Option<Ordering>;
    // The lt() method will be automatically implemented like this
    // for all types that implement the partial_cmp() method:
    #[inline]
    fn lt(&self, other: &Rhs) -> bool {
        match self.partial_cmp(other) {
            Some(Less) => true,
            _ => false,
        }
    }
}

Aside from methods, traits may contain only associated type definitions. As with interfaces in Java or C#, traits define only interface requirements, not the data layout of concrete implementations of the interface.
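
For example, the standard Iterator trait declares an associated type alongside its required method (a simplified excerpt; the real trait also has many provided methods):

Rust
pub trait Iterator {
    // Each implementation must specify the type of element it yields...
    type Item;

    // ...and how to produce the next one (None when the sequence ends).
    fn next(&mut self) -> Option<Self::Item>;
}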


Traits Implement Polymorphism

Together with generics, traits provide static (compile-time) polymorphism. The data layout of generic data structures and the actual implementations of generic methods and functions are selected at compile time based on the known types of values. The resulting machine code is as efficient as it would be with manual specialization, which makes generics a zero-cost abstraction mechanism.
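
A small sketch of what this means in practice (hypothetical function name):

Rust
use std::fmt::Debug;

fn describe<T: Debug>(value: T) {
    println!("{:?}", value);
}

fn main() {
    // The compiler generates a separate, fully specialized copy of
    // describe() for i32 and for &str (monomorphization), so both calls
    // are statically dispatched with no runtime overhead.
    describe(42);
    describe("hello");
}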

But sometimes, you want to have generic code that acts differently based on real runtime value types. Rust implements dynamic (runtime) polymorphism via so-called trait objects.

For example, here’s how the Abstract Factory pattern looks in Rust:

First we need to define the interfaces:

Rust
// Interfaces of the produced products
trait ProductA {
    fn do_foo(&mut self);
}
  
trait ProductB {
    fn do_bar(&mut self);
}
  
// Interface of the abstract product factory
trait ProductFactory {
    fn make_product_a(&self) -> Box<dyn ProductA>;
    fn make_product_b(&self) -> Box<dyn ProductB>;
}

Note that the factory produces its products in Boxes. A Box is Rust’s equivalent of std::unique_ptr in C++: it owns an object allocated on the heap. Here the boxes hold trait objects (Box<dyn ProductA>), which are essentially fat pointers to a concrete implementation of a trait.
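
As a quick aside (a minimal sketch), a Box on its own works much like std::unique_ptr:

Rust
fn main() {
    // Allocate an integer on the heap; the Box is its unique owner.
    let boxed = Box::new(42);
    println!("{}", *boxed);
    // The heap memory is freed automatically when `boxed` goes out of scope.
}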

Then we define the concrete implementations of products and the factory:

Rust
// Example of concrete implementation of products
struct ConcreteProductA;
struct ConcreteProductB;
  
// Implement trait ProductA for struct ConcreteProductA
impl ProductA for ConcreteProductA {
    fn do_foo(&mut self) {
        println!("ConcreteProductA doing foo");
    }
}
  
// Implement trait ProductB for struct ConcreteProductB
impl ProductB for ConcreteProductB {
    fn do_bar(&mut self) {
        println!("ConcreteProductB doing bar");
    }
}
  
// Example of concrete factory
struct ConcreteFactory;
  
// Implement trait ProductFactory for struct ConcreteFactory,
// creating some concrete products when asked
impl ProductFactory for ConcreteFactory {
    fn make_product_a(&self) -> Box<dyn ProductA> {
        Box::new(ConcreteProductA)
    }
     
    fn make_product_b(&self) -> Box<dyn ProductB> {
        Box::new(ConcreteProductB)
    }
}

Note how trait implementations are separated from declared structures. This is why Rust has traits, not interfaces. A trait can be implemented for a type not only by the module that declares the type but from anywhere else in the program. This opens up many possibilities for extending the behavior of library types if the provided interfaces don’t meet your needs.
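
For instance (hypothetical trait name), nothing prevents us from implementing our own trait for a standard library type:

Rust
// A trait defined in our own code...
trait Doubled {
    fn doubled(&self) -> Self;
}

// ...implemented for i32, a type declared in the standard library:
impl Doubled for i32 {
    fn doubled(&self) -> Self {
        self * 2
    }
}

fn main() {
    let x: i32 = 21;
    println!("{}", x.doubled()); // prints 42
}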

Finally, we can implement abstract algorithms using the abstract factory to make abstract products and operate on them:

Rust
fn make_and_use_some_stuff(factory: &dyn ProductFactory) {
    let mut a: Box<dyn ProductA> = factory.make_product_a();
    let mut b: Box<dyn ProductB> = factory.make_product_b();
  
    a.do_foo();
    b.do_bar();
}

Here, the function make_and_use_some_stuff() doesn’t know the concrete types of the factory and products involved in computation. It operates only on trait objects. All method calls involve dynamic dispatch on the virtual method tables stored inside the trait objects. The function isn’t generic and only one implementation of it exists in the program.
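
Calling code might look like this (a hypothetical sketch): only the caller knows the concrete factory type, and the reference is coerced to a trait object at the call site:

Rust
fn main() {
    let factory = ConcreteFactory;
    make_and_use_some_stuff(&factory);
}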


Traits May Be Implemented Automatically

Some traits may be automatically derived and implemented by the compiler for user-defined data structures. For example, the PartialEq trait that defines the == comparison operator can be automatically implemented for any data structure provided that all its fields implement PartialEq too:

Rust
#[derive(PartialEq)]
struct Person {
    name: String,
    age: u32,
}

The #[derive] attribute can be used with the following traits:

  • Comparison traits: Eq, PartialEq, Ord, PartialOrd
  • Clone, used to create a copy of a value via reference to it
  • Copy, which gives the type copy semantics instead of move semantics by default
  • Hash, used by many containers to compute a hash value of elements
  • Default, used for creating empty instances in a consistent way
  • Debug, a trait used by the {:?} formatting string in debugging output
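
A short sketch of derived traits in action (hypothetical struct):

Rust
#[derive(Debug, Clone, PartialEq)]
struct Person {
    name: String,
    age: u32,
}

fn main() {
    let a = Person { name: String::from("Alice"), age: 30 };
    let b = a.clone();          // Clone gives us .clone()
    assert_eq!(a, b);           // PartialEq gives us == (used by assert_eq!)
    println!("{:?}", a);        // Debug gives us {:?} formatting
}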


Conclusion

At Apriorit, we have vast experience in software programming, including working with software based on Rust. In this part of our Rust language programming tutorial, we’ve described how Rust features such as threads without data races and trait-based generics help programmers develop better software.

If you’re also interested in such Rust features as minimal runtime and efficient C bindings, take a look at the fourth part of our tutorial.
