5 Inter-Process Communication

Course Content#

IPC - Inter-Process Communication

——Shared Memory——#

  • Related processes agree on the shared data location before the child is created [i.e., before fork]
  • Related interface: shm*

shmget#

Allocates a System V shared memory segment

[PS] System V IPC is still in use; it is the System V init system [its startup mechanism] that has been abandoned

  • man shmget
  • Prototype
    • int shmget(key_t key, size_t size, int shmflg);
    • Return value type: int
    • Parameter types: key_t, size_t, int
    • ipc → Inter-Process Communication
  • Description
    • The key passed in is bound to the id that is returned
    • The size of the new segment is rounded up to a whole number of pages [ceiling]
    • Creating shared memory requires a flag: IPC_CREAT
    • If IPC_CREAT is not set, shmget looks up the segment corresponding to the key and checks the caller's access permissions
    • Note: permissions must also be specified, in at least the low 9 bits, e.g. 0600; the leading 0 [octal] cannot be omitted
  • Return value
    • On success, returns a valid shared memory identifier [id]; on error, returns -1 and sets errno
  • After obtaining the id through the key, how to find the corresponding address? shmat 👇
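
A minimal sketch of creating a segment [the key, size, and permissions here are illustrative choices, not from the original code]:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main() {
    // IPC_CREAT | IPC_EXCL: create, and fail if a segment for this key already exists
    int shmid = shmget((key_t)0x6666, 4096, IPC_CREAT | IPC_EXCL | 0600);
    if (shmid == -1) { perror("shmget"); return 1; }
    printf("shmid = %d\n", shmid);
    return 0;
}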

shmat, shmdt#

Shared memory operations [at = attach; dt = detach]

  • man shmat
  • Prototype
    • void *shmat(int shmid, const void *shmaddr, int shmflg);
    • int shmdt(const void *shmaddr);
    • Note that the return value of shmat is: void *
  • Description
    • shmat
      • Attaches the shared memory segment specified by the id to the calling process's own address space
        • Each process believes it is using a separate contiguous memory block [Virtual Memory Technology]
      • shmaddr specifies the attachment address:
        • NULL: System automatically attaches [Common]
        • Otherwise, attaches at shmaddr rounded down to a page boundary [if SHM_RND is set in shmflg], or the caller must supply a page-aligned address
      • A successful call updates the shared memory structure shmid_ds
        • shm_nattch: the attachment count; the physical memory behind a segment may be attached by multiple processes
    • shmdt
      • It is the inverse operation of shmat
      • It must be applied to the currently attached address
  • Return value
    • On success, shmat returns the address, shmdt returns 0
    • On error, returns -1 and sets errno
  • The id is unique across the entire system; the same id must correspond to the same physical memory
    • [Note] The same id yields different addresses from shmat in different processes, because each process has its own independent address space [the concept of virtual memory]
  • Speaking of which, shmget obtains the id through the key, shmat obtains the memory address through the id, but how is the [key] obtained? 👇
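
Continuing the sketch above, attach and detach might look like this:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main() {
    int shmid = shmget((key_t)0x6666, 4096, IPC_CREAT | 0600);  // illustrative key/size
    if (shmid == -1) { perror("shmget"); return 1; }
    char *p = (char *)shmat(shmid, NULL, 0);    // NULL: let the system choose the address
    if (p == (char *)-1) { perror("shmat"); return 1; }
    p[0] = 'A';                                 // use it like ordinary memory
    if (shmdt(p) == -1) perror("shmdt");        // detach the currently attached address
    return 0;
}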

ftok#

Converts a pathname and a project identifier into a System V IPC key

  • man ftok
  • Prototype
    • key_t ftok(const char *pathname, int proj_id);
    • Requires a pathname and an int variable to complete the conversion
  • Description
    • The file must exist and be accessible
    • Only the low 8 bits of proj_id are used, and they must not be 0
    • The same pathname and proj_id always produce the same return value
      • Therefore, fixed input parameters yield a fixed key
  • Return value
    • On success, returns key; on failure, returns -1 and sets errno
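
A short sketch ["/tmp" and 'b' are arbitrary choices; the file must exist]:

#include <stdio.h>
#include <sys/ipc.h>

int main() {
    key_t key = ftok("/tmp", 'b');   // same pathname + proj_id → same key every time
    if (key == -1) { perror("ftok"); return 1; }
    printf("key = 0x%x\n", (unsigned)key);
    return 0;
}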

shmctl#

Control of shared memory

  • man shmctl
  • Prototype
    • int shmctl(int shmid, int cmd, struct shmid_ds *buf);
    • cmd: the command, an int; as you might expect, a set of uppercase macro constants
  • Description
    • The man page also gives the detailed layout of the shmid_ds structure
    • Some commands
      • IPC_STAT: Copy shmid_ds structure information to buf [must have read permission]
      • IPC_SET: Modify the shmid_ds structure
      • IPC_RMID: Mark the segment to be destroyed
        • The segment will only be truly destroyed when the shared memory segment is not attached
        • No buf variable is needed, just set it to NULL
        • [PS] Check the return value to confirm the segment was really destroyed; otherwise remnants may be left behind
      • IPC_INFO [Linux specific, to avoid compatibility issues, try to avoid using]
  • Return value
    • Generally: 0 on success [except the INFO/STAT commands]; -1 on error
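
A minimal destruction sketch [the key is reused from the earlier illustrative examples]:

#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main() {
    int shmid = shmget((key_t)0x6666, 4096, 0600);   // look up an existing segment
    if (shmid == -1) { perror("shmget"); return 1; }
    // IPC_RMID needs no buf; the segment is truly destroyed once shm_nattch reaches 0
    if (shmctl(shmid, IPC_RMID, NULL) == -1) { perror("shmctl"); return 1; }
    return 0;
}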

From the thread library [pthread]

Mutex, condition variables

[PS] With multithreading and high concurrency, mutual exclusion must be considered when using shared memory

pthread_mutex_*#

[Mutex] Operations on the mutual exclusion lock

  • man pthread_mutex_init, etc.
  • int pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr);
  • int pthread_mutex_lock(pthread_mutex_t *mutex);
  • int pthread_mutex_unlock(pthread_mutex_t *mutex);
  • int pthread_mutex_destroy(pthread_mutex_t *mutex);
  • lock: checks whether the mutex is already locked; if so, the caller is suspended until it is unlocked [a possible blocking point]
  • init: dynamic initialization; takes an attribute variable
  • Attribute interface: pthread_mutexattr_*
    • init: Initialization
      • int pthread_mutexattr_init(pthread_mutexattr_t *attr);
      • attr is an output parameter; the function always returns 0
    • setpshared: Set inter-process sharing
      • int pthread_mutexattr_setpshared(pthread_mutexattr_t *attr, int pshared);
      • The pshared variable is an int, effectively a flag
      • 0: private to one process; 1: shared between processes → the macros PTHREAD_PROCESS_PRIVATE and PTHREAD_PROCESS_SHARED
  • Basic operations of a mutex: create the attribute variable 👉 initialize the mutex 👉 lock/unlock operations; see the code demonstration——Shared Memory [Mutex] and the sketch just below
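
A minimal sketch of a process-shared mutex placed in shared memory [key and size are illustrative]:

#include <pthread.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main() {
    int shmid = shmget((key_t)0x6666, sizeof(pthread_mutex_t), IPC_CREAT | 0600);
    pthread_mutex_t *lock = (pthread_mutex_t *)shmat(shmid, NULL, 0);

    pthread_mutexattr_t attr;                    // ① create the attribute variable
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(lock, &attr);             // ② initialize the mutex in shared memory

    pthread_mutex_lock(lock);                    // ③ lock / unlock as usual
    /* ... critical section ... */
    pthread_mutex_unlock(lock);

    shmdt(lock);
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}

Compile with gcc ... -lpthread.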

pthread_cond_*#

[Condition Variable] Control conditions

  • man pthread_cond_init, etc.
  • int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *attr);
  • int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
  • int pthread_cond_signal(pthread_cond_t *cond);
  • int pthread_cond_broadcast(pthread_cond_t *cond);
  • Structure is very similar to mutex
  • Condition variables are a synchronization device that lets threads suspend and release the processor until some condition is met
  • Basic operations: Send condition signal, wait for condition signal
  • ⭐ Must be paired with a mutex to avoid race conditions: otherwise one thread may be just about to wait on the condition while another thread has already sent the signal, and the message is missed
  • init: Initialization can use attribute variables [but in fact, in Linux thread implementation, attribute variables are ignored]
  • signal: Will only wake up 1 thread that is waiting; if there are no waiting threads, nothing happens
  • broadcast: Will wake up all waiting threads
  • wait
    • ① Unlocks the mutex and waits for the condition variable to be signaled [a genuinely atomic operation]
    • ② Before wait returns [before the thread is restarted / after receiving the signal], the mutex will be locked again
    • [PS]
      • When calling wait, the mutex must be in a locked state
      • While waiting, it will not consume CPU
  • ❗ The atomicity guarantee [the passage highlighted in the man page]
    • It ensures that during wait's unlock-the-mutex → start-waiting window [an atomic operation], no other thread can slip in a signal that gets lost
    • ❓ Since the mutex is already unlocked at that point, this atomicity presumably relies on something mutex-like inside the implementation
    • ❓ It would seem more natural for wait to prepare to wait first → then unlock the mutex; what is gained by unlocking first and then depending on an atomic operation?
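
A minimal single-process sketch of the wait/signal pairing [the names ready, lock, cond are illustrative]:

#include <pthread.h>

pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;
int ready = 0;                            // the condition being waited on

void *waiter(void *arg) {
    pthread_mutex_lock(&lock);            // wait requires a locked mutex
    while (!ready)                        // loop guards against spurious wake-ups
        pthread_cond_wait(&cond, &lock);  // atomically unlocks, sleeps, relocks on return
    pthread_mutex_unlock(&lock);
    return NULL;
}

void *signaler(void *arg) {
    pthread_mutex_lock(&lock);
    ready = 1;
    pthread_mutex_unlock(&lock);          // unlock first, then signal [ordering ② below]
    pthread_cond_signal(&cond);           // wakes at most one waiting thread
    return NULL;
}

int main() {
    pthread_t w, s;
    pthread_create(&w, NULL, waiter, NULL);
    pthread_create(&s, NULL, signaler, NULL);
    pthread_join(w, NULL);
    pthread_join(s, NULL);
    return 0;
}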

Code Demonstration#

Shared Memory [Without Locking]#

5 processes perform a cumulative sum from 1 to 10000

  • [Code screenshots omitted; see the hedged reconstruction at the end of this section]
  • ❗ Here the child directly uses the shared memory address share_memory inherited from the parent process [virtual address]
    • It can be seen that the same virtual address in different processes points to the same shared memory [physical memory]
    • If the child instead calls shmat with the inherited id, it gets a new virtual address in the child process that still points to the same shared memory; the originally inherited share_memory address also remains usable
  • ❗ How to achieve true destruction after using shared memory
    • Use shmdt to detach the shared memory attached in all processes [experimentally, this step can be omitted because the process will automatically detach upon termination]
    • Then use shmctl with IPC_RMID to destroy the shared memory segment [remove shmid]
    • [PS]
      • Each process counts as one attachment, and when the attachment count is 0, shmctl will truly destroy the memory segment
      • You can also remove the IPC_EXCL flag from shmget to not check if it already exists, but this is not a fundamental solution
  • If the above shmctl operation is not performed
    • The first execution is fine, indicating that the shared memory segment was successfully created
    • But the second execution fails with "File exists" [EEXIST]
      • This indicates that the shared memory was not automatically destroyed after the previous run, and the second run tries to create it with the same key [the same key can map to a different shmid each time the segment is recreated]
    • [Exploration Process]
      • ipcs: Display IPC related resource information
        • You can see the undestroyed shared memory, as well as message queues and semaphores
        • And their keys, ids, permission perms
        • You can also see that their nattch is 0, indicating that there are no attachments
      • ipcrm: Delete IPC resources
        • The usage of parameters can be viewed through --help
        • Resources can be removed by key or id
        • Manually delete resources and then run the program: ipcs | grep [owner] | awk '{print $2}' | xargs ipcrm -m && ./a.out
  • ❗ The essence of the wrong result [personal understanding]
    • The correct sum from 1 to 10000 should be 50005000
    • Multiple processes operate on the shared memory at the same time, [or] one process's read/write has not fully completed when another process starts a new operation. For example:
      • One process has just incremented now and written it back; at that moment another process reads the new now and increments it again, then adds the twice-incremented now to the old sum
      • That is, now++ together with the sum accumulation is not an atomic operation
    • In practice the CPU is so fast that the pair [now++ plus the sum accumulation written to memory] usually completes without interleaving, so the probability of error is small
    • Multi-core machines are more likely to produce wrong results: a single processor can run only one process at a time, whereas multiple cores operate on the shared memory truly in parallel
  • [PS]
    • When setting permissions, the first 0 in 0600 indicates the use of octal
    • usleep(100): puts the process to sleep for 100 µs [usleep takes microseconds, not ms], so one process computes once and blocks, letting the others proceed, purely to make the division of labor visible
    • Header files should be added according to the man manual
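
A hedged reconstruction of this demo [names such as share_memory, now, and sum are guesses from the discussion above]:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

#define PROC_NUM 5

int main() {
    int shmid = shmget((key_t)0x6666, 2 * sizeof(long), IPC_CREAT | IPC_EXCL | 0600);
    if (shmid == -1) { perror("shmget"); exit(1); }      // fails with EEXIST on a second run
    long *share_memory = (long *)shmat(shmid, NULL, 0);
    long *now = &share_memory[0], *sum = &share_memory[1];
    *now = 0; *sum = 0;

    for (int i = 0; i < PROC_NUM; i++) {
        if (fork() == 0) {                 // child inherits share_memory: same virtual address
            while (*now < 10000) {
                *sum += ++*now;            // now++ and the accumulation are not atomic → races
                usleep(100);               // 100 µs: let the other processes interleave
            }
            shmdt(share_memory);
            exit(0);
        }
    }
    while (wait(NULL) > 0);                // reap all children
    printf("sum = %ld [expected 50005000]\n", *sum);
    shmdt(share_memory);
    shmctl(shmid, IPC_RMID, NULL);         // truly destroyed once shm_nattch reaches 0
    return 0;
}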

Shared Memory [Mutex]#

A more efficient lock that lives in memory, similar in idea to file locks [see the earlier example]

  • [Code screenshots omitted; a hedged reconstruction follows at the end of this section]
  • Create mutex: Create attribute variable 👉 Initialize mutex
  • Two conditions for using mutex between processes
    • ① The mutex variable is placed in shared memory, so every process shares it and can be coordinated through it
    • ② When creating the mutex in the parent process, set the attribute variable to process-shared [by default a mutex works only between the threads of one process]
      • Some kernels may not actually require this, but for compatibility it is recommended to set it ❗
  • [PS]
    • After calculating the cumulative sum, remember to unlock
    • The slowest operation in the system is IO operation, so unlocking before printf allows the mutex to be released earlier, which will be more efficient
    • fflush can flush the buffer manually to avoid interleaved output
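
A hedged reconstruction [this sketch uses IPC_PRIVATE to sidestep the fixed key; the original uses ftok/shmget as above]:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

#define PROC_NUM 5

typedef struct {                  // everything shared lives in one segment
    pthread_mutex_t lock;
    long now, sum;
} shared_t;

int main() {
    int shmid = shmget(IPC_PRIVATE, sizeof(shared_t), IPC_CREAT | 0600);
    shared_t *sh = (shared_t *)shmat(shmid, NULL, 0);
    sh->now = sh->sum = 0;

    pthread_mutexattr_t attr;                              // ① attribute variable
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&sh->lock, &attr);                  // ② mutex in shared memory

    for (int i = 0; i < PROC_NUM; i++) {
        if (fork() == 0) {
            while (1) {
                pthread_mutex_lock(&sh->lock);
                if (sh->now >= 10000) { pthread_mutex_unlock(&sh->lock); break; }
                sh->sum += ++sh->now;
                long n = sh->now, s = sh->sum;
                pthread_mutex_unlock(&sh->lock);           // unlock before the slow IO
                printf("pid %d: now=%ld sum=%ld\n", (int)getpid(), n, s);
                fflush(stdout);                            // flush to keep output tidy
            }
            shmdt(sh);
            exit(0);
        }
    }
    while (wait(NULL) > 0);
    printf("sum = %ld\n", sh->sum);
    shmdt(sh);
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}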

Shared Memory [Condition Variable]#

Each process takes turns to calculate 100 times

  • [Code screenshots omitted; the key wait pattern appears in the snippet at the end of this section, followed by a hedged turn-taking reconstruction]
  • Initialization of cond condition variable, similar to mutex
    • In the Linux thread implementation, the attr attribute variable is actually not required; the same applies to the mutex
  • On a single-core machine, usleep for a while before sending each condition signal, so that the other processes have had a chance to reach the wait state. Otherwise:
    • The parent process runs first and sends the signal; if the child has not yet had its turn to run, the signal is missed
    • The same goes for the child processes: before sending a signal, let the other children run into the wait state first [in fact, usleep cannot guarantee this]
  • ❗ Note
    • There must be a locked mutex before wait
    • Remember the unlock + send-signal operations after each unit of work [every 100 calculations / on completing the 10000-term accumulation]
    • There are two sequences for sending signal operations:
      • ① Locking — Sending signal — Unlocking
        • Disadvantage: the waiting thread is woken up in the kernel [→ user state], but finds no mutex available to lock, so it unfortunately returns [from user state] to kernel space until the mutex becomes available
          • The two extra context switches [between kernel state and user state] waste performance
        • Advantage:
          • Ensures thread priority [❓]
          • And in the Linux thread implementation, there are cond_wait queues and mutex_lock queues, so it will not return to user state, thus avoiding performance loss
      • ② Locking — Unlocking — Sending signal [This article adopts]
        • Advantage: Ensures that threads in the cond_wait queue have a mutex available to lock
        • Disadvantage: A low-priority thread that grabs the mutex may execute first
      • Reference: Condition Variable pthread_cond_signal, pthread_cond_wait——CSDN
  • Implementation Effect
    • Image
    • Each process calculates 100 times in turn
    • [PS] Even usleep cannot completely rule out missed signals; a variable in shared memory can be used to record that a signal was sent [as in the snippet below]; in addition, spurious wake-ups can occur
if (!count) {                   // count records sent signals → guards against a missed signal [sent before the wait began]
  pthread_mutex_lock(&lock);
  while (condition_is_false) {  // re-check the predicate → guards against spurious wake-ups [e.g., several threads woken at once]
    pthread_cond_wait(&cond, &lock);
  }
  //...
  pthread_mutex_unlock(&lock);
}
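
A hedged turn-taking reconstruction [this sketch replaces the usleep trick with a shared turn variable, so an early signal cannot be lost; broadcast plus the re-checked predicate also handles spurious wake-ups]:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/wait.h>

#define PROC_NUM 5

typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  cond;
    int  turn;                    // whose turn it is: 0 .. PROC_NUM-1
    long now, sum;
} shared_t;

int main() {
    int shmid = shmget(IPC_PRIVATE, sizeof(shared_t), IPC_CREAT | 0600);
    shared_t *sh = (shared_t *)shmat(shmid, NULL, 0);
    sh->turn = 0; sh->now = sh->sum = 0;

    pthread_mutexattr_t ma;
    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&sh->lock, &ma);
    pthread_condattr_t ca;
    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&sh->cond, &ca);

    for (int i = 0; i < PROC_NUM; i++) {
        if (fork() == 0) {
            while (1) {
                pthread_mutex_lock(&sh->lock);
                while (sh->now < 10000 && sh->turn != i)       // shared predicate: no missed
                    pthread_cond_wait(&sh->cond, &sh->lock);   // signals, no spurious wake-ups
                if (sh->now >= 10000) { pthread_mutex_unlock(&sh->lock); break; }
                for (int k = 0; k < 100 && sh->now < 10000; k++)
                    sh->sum += ++sh->now;                      // 100 calculations per turn
                sh->turn = (i + 1) % PROC_NUM;                 // pass the turn on
                pthread_mutex_unlock(&sh->lock);               // ② unlock first ...
                pthread_cond_broadcast(&sh->cond);             // ... then wake all waiters
            }
            shmdt(sh);
            exit(0);
        }
    }
    while (wait(NULL) > 0);
    printf("sum = %ld\n", sh->sum);
    shmdt(sh);
    shmctl(shmid, IPC_RMID, NULL);
    return 0;
}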

Simple Chat Room#

  • chat.h
    • Username + message + the agreed convention for using the shared memory
  • 1.server.c
    • The logic is basically similar to the earlier code demonstrations; note the clearing operation at the end, mainly for data safety
  • 2.client.c
    • Pay attention to the role of the while loop [line 41 in the original screenshot]: it prevents a client from grabbing the lock so that the server never receives the signal
      • There is also a hidden danger: A client may not be able to grab the lock, causing blocking
    • [Note] When compiling 2.client.c with gcc, be sure to add -lpthread; otherwise it compiles without error, but at runtime the server will not receive the signal
  • Effect Demonstration
    • [Screenshot omitted] On the left is the server; on the right are two clients

Additional Knowledge Points#

  • Related processes: between parent and child, between siblings

Points to Consider#

  • Processes competing to calculate VS. Each process calculates one hundred times, which method is more efficient?
    • The latter is more efficient; taken to the extreme, a single process doing all the computation is the most efficient
    • This is a question of CPU throughput: the accumulation is CPU-bound, not IO-bound

Tips#

  • Command to view IPC related resource information: ipcs
  • For user threads, if one thread in a process crashes, all threads in the entire process will crash
  • Movie recommendation before class: "Dances with Wolves" 1990
