Minimize the word 'like' when other words would fit
carols10cents committed Oct 3, 2024
1 parent 13e7c88 commit 9572827
Showing 7 changed files with 101 additions and 100 deletions.
4 changes: 2 additions & 2 deletions src/ch17-00-async-await.md
@@ -57,7 +57,7 @@
We could avoid blocking our main thread by spawning a dedicated thread to
download each file. However, we would eventually find that the overhead of those
threads was a problem. It would also be nicer if the call were not blocking in
the first place. Last but not least, it would be better if we could write in the
-same direct style we use in blocking code. Something like this:
+same direct style we use in blocking code. Something similar to this:

```rust,ignore,does_not_compile
let data = fetch_data_from(url).await;
@@ -121,7 +121,7 @@
to work concurrently on your own tasks.

The same basic dynamics come into play with software and hardware. On a machine
with a single CPU core, the CPU can only do one operation at a time, but it can
-still work concurrently. Using tools like threads, processes, and async, the
+still work concurrently. Using tools such as threads, processes, and async, the
computer can pause one activity and switch to others before eventually cycling
back to that first activity again. On a machine with multiple CPU cores, it can
also do work in parallel. One core can be doing one thing while another core
34 changes: 17 additions & 17 deletions src/ch17-01-futures-and-syntax.md
@@ -5,12 +5,12 @@
The key elements of asynchronous programming in Rust are *futures* and Rust’s
`async` and `await` keywords.

A *future* is a value which may not be ready now, but will become ready at some
point in the future. (This same concept shows up in many languages, sometimes
-under other names like “task” or “promise”.) Rust provides a `Future` trait as a
-building block so different async operations can be implemented with different
-data structures, but with a common interface. In Rust, we say that types which
-implement the `Future` trait are futures. Each type which implements `Future`
-holds its own information about the progress that has been made and what "ready"
-means.
+under other names such as “task” or “promise”.) Rust provides a `Future` trait
+as a building block so different async operations can be implemented with
+different data structures, but with a common interface. In Rust, we say that
+types which implement the `Future` trait are futures. Each type which
+implements `Future` holds its own information about the progress that has been
+made and what "ready" means.

The `async` keyword can be applied to blocks and functions to specify that they
can be interrupted and resumed. Within an async block or async function, you can
@@ -26,7 +26,7 @@
syntax. That’s for good reason, as we’ll see!

Most of the time when writing async Rust, we use the `async` and `await`
keywords. Rust compiles them into equivalent code using the `Future` trait, much
-like it compiles `for` loops into equivalent code using the `Iterator` trait.
+as it compiles `for` loops into equivalent code using the `Iterator` trait.
Because Rust provides the `Future` trait, though, you can also implement it for
your own data types when you need to. Many of the functions we’ll see
throughout this chapter return types with their own implementations of `Future`.
@@ -103,7 +103,7 @@
We have to explicitly await both of these futures, because futures in Rust are
*lazy*: they don’t do anything until you ask them to with `await`. (In fact,
Rust will show a compiler warning if you don’t use a future.) This should
remind you of our discussion of iterators [back in Chapter 13][iterators-lazy].
Iterators do nothing unless you call their `next` method—whether directly, or
-using `for` loops or methods like `map` which use `next` under the hood. With
+using `for` loops or methods such as `map` which use `next` under the hood. With
futures, the same basic idea applies: they do nothing unless you explicitly ask
them to. This laziness allows Rust to avoid running async code until it’s
actually needed.
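
As a small illustration of that laziness, here is a sketch splitting Listing
17-1’s `trpl::get(url)` call apart from its `await`:

```rust,ignore
let response_future = trpl::get(url); // builds the future; no request is sent yet
let response = response_future.await; // the HTTP request actually happens here
```
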
@@ -152,9 +152,9 @@
whose body is an async block. Thus, an async function’s return type is the type
of the anonymous data type the compiler creates for that async block.

Thus, writing `async fn` is equivalent to writing a function which returns a
-*future* of the return type. When the compiler sees a function like `async fn
-page_title` in Listing 17-1, it is equivalent to a non-async function defined
-like this:
+*future* of the return type. When the compiler sees a function definition such
+as the `async fn page_title` in Listing 17-1, it’s equivalent to a non-async
+function defined like this:

```rust
# extern crate trpl; // required for mdbook test
@@ -242,7 +242,7 @@
high-throughput web server with many CPU cores and a large amount of RAM has
very different needs than a microcontroller with a single core, a
small amount of RAM, and no ability to do heap allocations. The crates which
provide those runtimes also often supply async versions of common functionality
-like file or network I/O.
+such as file or network I/O.

Here, and throughout the rest of this chapter, we’ll use the `run` function
from the `trpl` crate, which takes a future as an argument and runs it to
@@ -282,7 +282,7 @@
keyword—represents a place where control gets handed back to the runtime. To
make that work, Rust needs to keep track of the state involved in the async
block, so that the runtime can kick off some other work and then come back when
it’s ready to try advancing this one again. This is an invisible state machine,
-as if you wrote an enum like this to save the current state at each `await`
+as if you wrote an enum in this way to save the current state at each `await`
point:

```rust
@@ -342,10 +342,10 @@
first.

Either future can legitimately “win,” so it doesn’t make sense to return a
`Result`. Instead, `race` returns a type we haven’t seen before,
-`trpl::Either`. The `Either` type is somewhat like a `Result`, in that it has
-two cases. Unlike `Result`, though, there is no notion of success or failure
-baked into `Either`. Instead, it uses `Left` and `Right` to indicate “one or the
-other”.
+`trpl::Either`. The `Either` type is somewhat similar to a `Result`, in that it
+has two cases. Unlike `Result`, though, there is no notion of success or
+failure baked into `Either`. Instead, it uses `Left` and `Right` to indicate
+“one or the other”.

```rust
enum Either<A, B> {
10 changes: 5 additions & 5 deletions src/ch17-02-concurrency-with-async.md
@@ -32,7 +32,7 @@
that our top-level function can be async.

> Note: From this point forward in the chapter, every example will include this
> exact same wrapping code with `trpl::run` in `main`, so we’ll often skip it
-> just like we do with `main`. Don’t forget to include it in your code!
+> just as we do with `main`. Don’t forget to include it in your code!

Then we write two loops within that block, each with a `trpl::sleep` call in it,
which waits for half a second (500 milliseconds) before sending the next
@@ -176,7 +176,7 @@
Sharing data between futures will also be familiar: we’ll use message passing
again, but this time with async versions of the types and functions. We’ll take a
slightly different path than we did in Chapter 16, to illustrate some of the key
differences between thread-based and futures-based concurrency. In Listing 17-9,
-we’ll begin with just a single async block—*not* spawning a separate task like
+we’ll begin with just a single async block—*not* spawning a separate task as
we spawned a separate thread.

<Listing number="17-9" caption="Creating an async channel and assigning the two halves to `tx` and `rx`" file-name="src/main.rs">
@@ -253,7 +253,7 @@
polling—that is, stop awaiting.

The `while let` loop pulls all of this together. If the result of calling
`rx.recv().await` is `Some(message)`, we get access to the message and we can
-use it in the loop body, just like we could with `if let`. If the result is
+use it in the loop body, just as we could with `if let`. If the result is
`None`, the loop ends. Every time the loop completes, it hits the await point
again, so the runtime pauses it again until another message arrives.
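
As a minimal sketch, assuming the `rx` receiver from Listing 17-9 is in scope:

```rust,ignore
// Receive until every sender has been dropped and the channel closes; each
// pass through the loop awaits, giving the runtime a chance to do other work.
while let Some(message) = rx.recv().await {
    println!("received '{message}'");
}
```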

@@ -278,7 +278,7 @@
points on the `recv` calls.

To get the behavior we want, where the sleep delay happens between receiving
each message, we need to put the `tx` and `rx` operations in their own async
blocks. Then the runtime can execute each of them separately using `trpl::join`,
-just like in the counting example. Once again, we await the result of calling
+just as in the counting example. Once again, we await the result of calling
`trpl::join`, not the individual futures. If we awaited the individual futures
in sequence, we would just end up back in a sequential flow—exactly what we’re
trying *not* to do.
@@ -325,7 +325,7 @@
that async block, it would be dropped once that block ends. In Chapter 13, we
learned how to use the `move` keyword with closures, and in Chapter 16, we saw
that we often need to move data into closures when working with threads. The
same basic dynamics apply to async blocks, so the `move` keyword works with
-async blocks just like it does with closures.
+async blocks just as it does with closures.
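
A hedged preview of the change Listing 17-12 makes, assuming the `tx` sender
from the earlier listings:

```rust,ignore
// `async move` transfers ownership of `tx` into the block. When the block
// finishes, `tx` is dropped, the channel closes, and the receiving loop can
// therefore end instead of waiting forever.
let tx_fut = async move {
    let vals = vec![String::from("hi"), String::from("from the future")];
    for val in vals {
        tx.send(val).unwrap();
        trpl::sleep(Duration::from_millis(500)).await;
    }
};
```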

In Listing 17-12, we change the async block for sending messages from a plain
`async` block to an `async move` block. When we run *this* version of the code,
26 changes: 13 additions & 13 deletions src/ch17-03-more-futures.md
@@ -289,8 +289,8 @@
syntax for working with them, and that is a good thing.

When we “join” futures with the `join` family of functions and macros, we
require *all* of them to finish before we move on. Sometimes, though, we only
-need *some* future from a set to finish before we move on—kind of like racing
-one future against another.
+need *some* future from a set to finish before we move on—kind of similar to
+racing one future against another.

In Listing 17-21, we once again use `trpl::race` to run two futures, `slow` and
`fast`, against each other. Each one prints a message when it starts running,
@@ -449,14 +449,14 @@
directly, using the `yield_now` function. In Listing 17-25, we replace all those
</Listing>

This is both clearer about the actual intent and can be significantly faster
-than using `sleep`, because timers like the one used by `sleep` often have
+than using `sleep`, because timers such as the one used by `sleep` often have
limits to how granular they can be. The version of `sleep` we are using, for
example, will always sleep for at least a millisecond, even if we pass it a
`Duration` of one nanosecond. Again, modern computers are *fast*: they can do a
lot in one millisecond!
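
Concretely, the replacement is a sketch along these lines, assuming the
`yield_now` function as used in Listing 17-25:

```rust,ignore
// Before: paying for a timer round-trip just to give up control.
trpl::sleep(Duration::from_nanos(1)).await;

// After: hand control straight back to the runtime, with no timer involved.
trpl::yield_now().await;
```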

-You can see this for yourself by setting up a little benchmark, like the one in
-Listing 17-26. (This isn’t an especially rigorous way to do performance
+You can see this for yourself by setting up a little benchmark, such as the one
+in Listing 17-26. (This isn’t an especially rigorous way to do performance
testing, but it suffices to show the difference here.) Here, we skip all the
status printing, pass a one-nanosecond `Duration` to `trpl::sleep`, and let
each future run by itself, with no switching between the futures. Then we run
@@ -481,9 +481,9 @@
determine when it hands over control via await points. Each future therefore
also has the responsibility to avoid blocking for too long. In some Rust-based
embedded operating systems, this is the *only* kind of multitasking!

-In real-world code, you won’t usually be alternating function calls with
-await points on every single line, of course. While yielding control like this
-is relatively inexpensive, it’s not free! In many cases, trying to break up a
+In real-world code, you won’t usually be alternating function calls with await
+points on every single line, of course. While yielding control in this way is
+relatively inexpensive, it’s not free! In many cases, trying to break up a
compute-bound task might make it significantly slower, so sometimes it’s better
for *overall* performance to let an operation block briefly. You should always
measure to see what your code’s actual performance bottlenecks are. The
@@ -566,13 +566,13 @@
Failed after 2 seconds
```

Because futures compose with other futures, you can build really powerful tools
-using smaller async building blocks. For example, you can use this same approach
-to combine timeouts with retries, and in turn use those with things like network
-calls—one of the examples from the beginning of the chapter!
+using smaller async building blocks. For example, you can use this same
+approach to combine timeouts with retries, and in turn use those with things
+such as network calls—one of the examples from the beginning of the chapter!
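
A hedged sketch of that composition: `fetch_data_from` is the hypothetical
network call from the start of the chapter, `timeout` is the helper built just
above, and the three attempts and one-second limit are arbitrary choices for
illustration:

```rust,ignore
// Illustrative only: give each attempt one second, retrying up to three times.
async fn fetch_with_retries(url: &str) -> Option<String> {
    for _ in 0..3 {
        match timeout(fetch_data_from(url), Duration::from_secs(1)).await {
            Ok(data) => return Some(data),
            Err(elapsed) => eprintln!("attempt timed out after {elapsed:?}"),
        }
    }
    None
}
```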

In practice, you will usually work directly with `async` and `await`, and
-secondarily with functions and macros like `join`, `join_all`, `race`, and so
-on. You’ll only need to reach for `pin` now and again to use them with those
+secondarily with functions and macros such as `join`, `join_all`, `race`, and
+so on. You’ll only need to reach for `pin` now and again to use them with those
APIs.

We’ve now seen a number of ways to work with multiple futures at the same
20 changes: 10 additions & 10 deletions src/ch17-04-streams.md
@@ -15,13 +15,13 @@
its synchronous `next` method. With the `trpl::Receiver` stream in particular,
we called an asynchronous `recv` method instead, but these APIs otherwise feel
very similar.

-That similarity isn’t a coincidence. A stream is like an asynchronous form of
-iteration. Whereas the `trpl::Receiver` specifically waits to receive messages,
-though, the general-purpose stream API is much more general: it provides the
-next item like `Iterator` does, but asynchronously. The similarity between
-iterators and streams in Rust means we can actually create a stream from any
-iterator. As with an iterator, we can work with a stream by calling its `next`
-method and then awaiting the output, as in Listing 17-30.
+That similarity isn’t a coincidence. A stream is similar to an asynchronous
+form of iteration. Whereas the `trpl::Receiver` specifically waits to receive
+messages, though, the general-purpose stream API is much more general: it
+provides the next item the way `Iterator` does, but asynchronously. The
+similarity between iterators and streams in Rust means we can actually create a
+stream from any iterator. As with an iterator, we can work with a stream by
+calling its `next` method and then awaiting the output, as in Listing 17-30.

<Listing number="17-30" caption="Creating a stream from an iterator and printing its values" file-name="src/main.rs">

@@ -103,7 +103,7 @@
as in Listing 17-31.

With all those pieces put together, this code works the way we want! What’s
more, now that we have `StreamExt` in scope, we can use all of its utility
-methods, just like with iterators. For example, in Listing 17-32, we use the
+methods, just as with iterators. For example, in Listing 17-32, we use the
`filter` method to filter out everything but multiples of three and five.

<Listing number="17-32" caption="Filtering a `Stream` with the `StreamExt::filter` method" file-name="src/main.rs">
@@ -168,7 +168,7 @@
Message: 'j'
```

We could do this with the regular `Receiver` API, or even the regular `Iterator`
-API, though. Let’s add something that requires streams, like adding a timeout
+API, though. Let’s add something that requires streams: adding a timeout
which applies to every item in the stream, and a delay on the items we emit.
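
As a hedged preview of the shape this takes (Listing 17-34 has the real
version; `get_messages` is the stream-producing function discussed above, and
`timeout` here is the method from `StreamExt`):

```rust,ignore
use std::pin::pin;

// Each item must now arrive within 200 milliseconds, or the stream produces
// an `Err` for that slot instead of waiting forever.
let mut messages = pin!(get_messages().timeout(Duration::from_millis(200)));

while let Some(result) = messages.next().await {
    match result {
        Ok(message) => println!("Message: '{message}'"),
        Err(reason) => eprintln!("Problem: {reason:?}"),
    }
}
```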

In Listing 17-34, we start by adding a timeout to the stream with the `timeout`
@@ -221,7 +221,7 @@
available.

Instead, we leave `get_messages` as a regular function which returns a stream,
and spawn a task to handle the async `sleep` calls.

-> Note: calling `spawn_task` like this works because we already set up our
+> Note: calling `spawn_task` in this way works because we already set up our
> runtime. Calling this particular implementation of `spawn_task` *without*
> first setting up a runtime will cause a panic. Other implementations choose
> different tradeoffs: they might spawn a new runtime and so avoid the panic but