CUDA教程8 -- 预处理方法

§预处理子

使用迭代法求线性方程组时,预处理可以有效减少迭代步数。常用的预处理子有Jacobi、ILU(Incomplete LU)、IC(Incomplete Cholesky),以及OpenFOAM使用的DILU(Diagonal-based Incomplete LU)和DIC(Diagonal-based Incomplete Cholesky)等。以DILU为例,其构造方法为令预处理子 $$ M=(E+L)E^{-1}(E+U), $$ 其中L和U分别为系数矩阵A的严格下三角、严格上三角部分,而E是一个对角矩阵,且满足: $$ diag(A) = diag(M) = diag((E+L)E^{-1}(E+U)) = diag(E+LE^{-1}U). $$

以一个三阶矩阵为例,有

$$ \begin{aligned} E &= \begin{bmatrix} a_{11} \\ & a_{22} \\ & & a_{33} \\ \end{bmatrix} - diag\left\{ \begin{bmatrix} 0 \\ a_{21} & 0 \\ a_{31} & a_{32} & 0 \\ \end{bmatrix} \begin{bmatrix} E_{11}^{-1} \\ & E_{22}^{-1} \\ & & E_{33}^{-1} \\ \end{bmatrix} \begin{bmatrix} 0 & a_{12} & a_{13} \\ & 0 & a_{23} \\ & & 0 & \\ \end{bmatrix} \right\} \\ &= \begin{bmatrix} a_{11} \\ & a_{22} - a_{21}E_{11}^{-1}a_{12} \\ & & a_{33} - a_{31}E_{11}^{-1}a_{13} - a_{32}E_{22}^{-1}a_{23} \end{bmatrix}. \end{aligned} $$

因此E的各个元素可以逐个计算,算法描述如下:

(图:DILU中对角矩阵E的计算算法)

使用DILU预处理子进行预处理,就是对输入向量v计算 $$ y=M^{-1}v. $$ 如果先计算并存储$M^{-1}$,再按矩阵向量乘法计算,问题在于$M^{-1}$的稀疏性可能远不如$M$本身,导致存储和计算成本过高,因此普遍的做法是求解两个三角方程组:

$$ My=v \Rightarrow (E+L)E^{-1}(E+U)y=v \Rightarrow (I + E^{-1}L)(I+E^{-1}U)y=E^{-1}v $$
过程为先求解下三角方程 $$ (I + E^{-1}L)y_1=E^{-1}v, $$ 然后求解上三角方程 $$ (I+E^{-1}U)y=y_1. $$ 下三角方程和上三角方程分别使用前代法和回代法求解,前代法的过程大致为:

(图:前代法求解下三角方程的过程)

回代法类似,不再赘述。

§并行三角方程求解

本节以下三角方程为例,上三角方程类似。

§多重着色算法

在预处理中,最主要的步骤是三角方程求解,然而传统的前代法和回代法都是串行的。为了解决这个问题,我们使用多重着色算法来发掘并行性。

每个下三角矩阵都对应一个有向无环图,依次选择并删除图中入度为0的节点,就可以得到图的一个拓扑排序。比如矩阵

$$ L=\begin{bmatrix} l_{11} \\ & l_{22} \\ & & l_{33} \\ l_{41} & & & l_{44} \\ l_{51} & & & & l_{55} \\ & l_{62} & & & & l_{66} \\ & & l_{73} & & & & l_{77} \\ & & & l_{84} & l_{85} & & & l_{88} \\ & & & l_{94} & l_{95} & & & & l_{99} \end{bmatrix} $$
对应的有向无环图及拓扑排序结果为:

(图:矩阵L对应的有向无环图及其拓扑排序结果)

那么求解$Lx=b$的时候,不需要严格按照$x_1$到$x_9$的顺序求解,可以先求$x_1,x_2,x_3$,再求$x_4,x_5,x_6,x_7$,最后求$x_8,x_9$,并且每层内部都可以同时求解。

对有向无环图进行拓扑排序的经典方法是先计算每个节点的入度,然后每轮选出所有入度为0的节点作为最大独立集,独立集中的所有节点构成一层,再删除这些节点并更新剩余节点的入度。代码大致为:

// 计算入度
std::vector<int> in_degree(N, 0);
for (int i = 0; i < N; ++i) {
    for (int j = 0; j < i; ++j) {
        if (L(i, j) != 0)
            ++in_degree[i];
    }
}

// 计算每个节点所在的层数,或者称为颜色
std::vector<int> colors(N);
std::queue<int> q;   // 辅助队列
for (int i = 0; i < N; ++i) {  // 第0层
    if (in_degree[i] == 0)
        q.push(i);
}

int level = 0;
while (!q.empty()) {
    int size = q.size();
    while (size-- > 0) {
        int t = q.front();
        q.pop();
        
        colors[t] = level;
        
        for (int j = t + 1; j < N; ++j) {
            if (L(j, t) != 0) {
                --in_degree[j];
                if (in_degree[j] == 0)
                    q.push(j);
            }
        }
    }
    ++level;
}

以矩阵L为例,计算的结果为:

colors = {0, 0, 0, 1, 1, 1, 1, 2, 2};
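把上述串行着色流程封装成函数,就可以直接在矩阵L上验证这一结果(这里仍沿用L(i,j)式的稠密存取,仅作演示):

```cpp
#include <queue>
#include <vector>

// 对下三角矩阵L的严格下三角部分按层着色,返回每行的颜色
std::vector<int> ColorLowerMatrix(const std::vector<std::vector<double>> &L) {
    const int N = static_cast<int>(L.size());

    // 入度,即严格下三角部分每行的非零元素个数
    std::vector<int> in_degree(N, 0);
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < i; ++j)
            if (L[i][j] != 0)
                ++in_degree[i];

    std::vector<int> colors(N);
    std::queue<int> q;
    for (int i = 0; i < N; ++i)   // 第0层
        if (in_degree[i] == 0)
            q.push(i);

    int level = 0;
    while (!q.empty()) {
        int size = static_cast<int>(q.size());
        while (size-- > 0) {
            int t = q.front();
            q.pop();
            colors[t] = level;

            // 删除节点t,更新其后继节点的入度
            for (int j = t + 1; j < N; ++j)
                if (L[j][t] != 0 && --in_degree[j] == 0)
                    q.push(j);
        }
        ++level;
    }
    return colors;
}
```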

如果考虑并行化:使用CPU多线程时,可以通过并发队列来实现,虽然数据争用较多、逻辑较复杂,但仍然可以实现得很好;而GPU上很难高效地实现并发队列,这种做法行不通。这里给出一个适合GPU并行的算法,它只需要硬件保证的弱一致性即可正常工作:

template<int BlockSize = BLOCK_SIZE>
__global__ void ColorLowerMatrixKernel(
        IndexType *colors,
        IndexType *__restrict__ counts,
        const IndexType curr_color,
        const IndexType *__restrict__ row_offsets,
        const IndexType *__restrict__ column_indices,
        const IndexType num_rows
) {
    __shared__ int s_counts[BlockSize];  // 用于归约计算着色了多少行
    int t = 0;   // 本线程着色了多少行

    IndexType row = threadIdx.x + blockIdx.x * blockDim.x;
    const IndexType stride = blockDim.x * gridDim.x;
    for (; row < num_rows; row += stride) {
        // 已着色
        if (colors[row] != -1)
            continue;

        bool can_be_colored = true;
        for (IndexType k = row_offsets[row]; k < row_offsets[row + 1]; ++k) {
            IndexType col = column_indices[k];

            // 只处理严格下三角
            if (col >= row)
                break;

            IndexType jc = colors[col];

            // 跳过已经着色的节点(不包含本轮着色的节点)
            // 如果有一个邻居没有着色则它仍然有依赖未满足
            if (jc == -1 || jc == curr_color) {
                can_be_colored = false;
                break;
            }
        }

        // 无依赖,可以着色
        if (can_be_colored) {
            colors[row] = curr_color;
            ++t;
        }
    }

    s_counts[threadIdx.x] = t;
    __syncthreads();

    if (BlockSize >= 1024 && threadIdx.x < 512)
        s_counts[threadIdx.x] += s_counts[threadIdx.x + 512];
    __syncthreads();

    if (BlockSize >= 512 && threadIdx.x < 256)
        s_counts[threadIdx.x] += s_counts[threadIdx.x + 256];
    __syncthreads();

    if (BlockSize >= 256 && threadIdx.x < 128)
        s_counts[threadIdx.x] += s_counts[threadIdx.x + 128];
    __syncthreads();

    if (BlockSize >= 128 && threadIdx.x < 64)
        s_counts[threadIdx.x] += s_counts[threadIdx.x + 64];
    __syncthreads();

    if (threadIdx.x < 32) {
        volatile int *v_counts = s_counts;
        v_counts[threadIdx.x] += v_counts[threadIdx.x + 32];
        v_counts[threadIdx.x] += v_counts[threadIdx.x + 16];
        v_counts[threadIdx.x] += v_counts[threadIdx.x + 8];
        v_counts[threadIdx.x] += v_counts[threadIdx.x + 4];
        v_counts[threadIdx.x] += v_counts[threadIdx.x + 2];
        v_counts[threadIdx.x] += v_counts[threadIdx.x + 1];
    }

    if (threadIdx.x == 0)
        counts[blockIdx.x] = s_counts[0];
}

class LowerCsrMatrixColor {
public:
    // 如果lower_mat不是下三角矩阵则取其下三角部分进行着色
    explicit LowerCsrMatrixColor(const CsrMatrix &lower_mat);

    LowerCsrMatrixColor() {}
    LowerCsrMatrixColor(const LowerCsrMatrixColor &other);
    LowerCsrMatrixColor(LowerCsrMatrixColor &&other);
    LowerCsrMatrixColor &operator=(const LowerCsrMatrixColor &other);
    LowerCsrMatrixColor &operator=(LowerCsrMatrixColor &&other);
    ~LowerCsrMatrixColor();

    // 颜色数组的指针
    IndexType *data() { return _colors; }
    const IndexType *data() const { return _colors; }

    // 颜色数组的长度,即矩阵的行数
    IndexType size() const { return _size; }

    // 总共用了多少种颜色,颜色的范围为[0, _num_colors)
    IndexType num_colors() const { return _num_colors; }
    
private:
    // 矩阵各行的颜色,长度和矩阵行数相等
    IndexType *_colors = nullptr;

    // 长度,和矩阵行数相等
    IndexType _size = 0;

    // 颜色总数
    IndexType _num_colors = 0;
};

LowerCsrMatrixColor::LowerCsrMatrixColor(const CsrMatrix &lower_mat) {
    _size = lower_mat.num_rows;
    CHECK(cudaMalloc(&_colors, _size * sizeof(IndexType)));

    int max_grid = 0;
    int block = 0;
    LaunchConfig(&max_grid, &block);

    int grid = (_size + block - 1) / block;
    if (grid > max_grid)
        grid = max_grid;

    // 填充为-1表示所有节点都未着色
    FillIndexTypeKernel<<<grid, block>>>(_colors, _size, -1);

    // 用于归约计算有多少个节点已经着色了
    IndexType *counts = nullptr;
    CHECK(cudaMalloc(&counts, grid * sizeof(IndexType)));

    std::vector<IndexType> h_counts(grid);

    IndexType num_colored_rows = 0;
    IndexType curr_color = 0;
    
    // curr_color为当前颜色,或者称为当前层数
    for (curr_color = 0; curr_color < _size; ++curr_color) {
        // 着色,并对着色的节点个数进行归约
        ColorLowerMatrixKernel<><<<grid, block>>>(
                _colors, counts, curr_color,
                lower_mat.row_offsets, lower_mat.column_indices, _size);

        // 将归约的结果传输到CPU上求和
        CHECK(cudaMemcpy(h_counts.data(), counts, grid * sizeof(IndexType),
                         cudaMemcpyDeviceToHost));
        for (IndexType n : h_counts)
            num_colored_rows += n;

        // 所有节点都已着色时退出算法
        if (num_colored_rows == _size)
            break;
    }

    CHECK(cudaFree(counts));

    _num_colors = curr_color + 1;
}

算法的输出结果与前述串行算法完全一致。

§使用着色结果求解三角方程

着色算法会输出各个节点的颜色,求解时按照颜色逐层计算未知数,层内可以并行,算法大致为:

// 求解颜色为curr_color的未知数
__global__ void LowerSolverKernel(
        ValueType *x,
        const ValueType *__restrict__ lower_values,
        const IndexType *__restrict__ lower_row_offsets,
        const IndexType *__restrict__ lower_column_indices,
        const IndexType *__restrict__ colors,
    	const ValueType *__restrict__ b,
        const IndexType num_rows,
        const IndexType curr_color
) {
    IndexType row = threadIdx.x + blockIdx.x * blockDim.x;
    const IndexType stride = blockDim.x * gridDim.x;
    for (; row < num_rows; row += stride) {
        if (colors[row] == curr_color) {
            ValueType t = 0;

            IndexType end = lower_row_offsets[row + 1] - 1;
            IndexType j = lower_row_offsets[row];
            for (; j < end; ++j)
                t += lower_values[j] * x[lower_column_indices[j]];

            x[row] = (b[row] - t) / lower_values[j];
        }
    }
}

// 求解Lx=b,每次解一层
CsrMatrix L;
Vector b;

Vector x(N);
LowerCsrMatrixColor lower_colors(L);
for (IndexType color = 0; color < lower_colors.num_colors(); ++color) {
    LaunchKernelWithNumThreads(
            LowerSolverKernel, N, 0, 0,
            x.data(),
            L.values, L.row_offsets, L.column_indices,
            lower_colors.data(), b.data(),
            N, color
    );
}

§改进

在上述求解算法中,每次只计算一层的未知数,却仍然遍历了整个colors数组来寻找可以求解的未知数。这一点可以改进:如果每次只遍历本层需要计算的未知数,就能减少无效的线程工作,平衡线程负载。

为了知道每层有哪些未知数,可以参考CSR格式的row_offsets的思路,我们将行号按照颜色排序得到reordered_rows,然后计算一个color_offsets表示每种颜色在reordered_rows中的起始下标,代码大致为:
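以前面的着色结果colors = {0, 0, 0, 1, 1, 1, 1, 2, 2}为例,这个过程在CPU上的等价实现大致如下(GPU版本用thrust::stable_sort_by_key和一个类似COO转CSR的核函数完成同样的事情;函数名仅为示意):

```cpp
#include <algorithm>
#include <numeric>
#include <vector>

// 根据每行的颜色计算reordered_rows和color_offsets
void BuildColorOffsets(std::vector<int> *reordered_rows,
                       std::vector<int> *color_offsets,
                       const std::vector<int> &colors,
                       int num_colors) {
    const int n = static_cast<int>(colors.size());

    // 行号0..n-1按颜色稳定排序,颜色相同的行保持原有顺序
    reordered_rows->resize(n);
    std::iota(reordered_rows->begin(), reordered_rows->end(), 0);
    std::stable_sort(reordered_rows->begin(), reordered_rows->end(),
                     [&](int a, int b) { return colors[a] < colors[b]; });

    // color_offsets[c]为颜色c在reordered_rows中的起始下标,
    // 类似CSR的row_offsets:先按颜色计数,再做前缀和
    color_offsets->assign(num_colors + 1, 0);
    for (int c : colors)
        ++(*color_offsets)[c + 1];
    for (int c = 0; c < num_colors; ++c)
        (*color_offsets)[c + 1] += (*color_offsets)[c];
}
```

对该例子,reordered_rows = {0, 1, ..., 8},color_offsets = {0, 3, 7, 9},于是颜色c的行号就是reordered_rows中下标区间[color_offsets[c], color_offsets[c+1])的部分。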

class LowerCsrMatrixColor1 {
public:
    LowerCsrMatrixColor1() {}

    // 如果lower_mat不是下三角矩阵则取其下三角部分进行着色
    explicit LowerCsrMatrixColor1(const CsrMatrix &lower_mat);
    LowerCsrMatrixColor1(const LowerCsrMatrixColor1 &other);
    LowerCsrMatrixColor1(LowerCsrMatrixColor1 &&other);
    LowerCsrMatrixColor1 &operator=(const LowerCsrMatrixColor1 &other);
    LowerCsrMatrixColor1 &operator=(LowerCsrMatrixColor1 &&other);
    ~LowerCsrMatrixColor1();

    // 颜色为color的各行行号,指向_reordered_rows中该颜色的起始位置
    IndexType *reordered_rows(IndexType color) {
        return _reordered_rows + _color_offsets[color];
    }

    const IndexType *reordered_rows(IndexType color) const {
        return _reordered_rows + _color_offsets[color];
    }

    IndexType color_offsets(IndexType color) const {
        return _color_offsets[color];
    }

    // 矩阵的行数
    IndexType size() const { return _size; }

    // 总共用了多少种颜色,颜色的范围为[0, _num_colors)
    IndexType num_colors() const { return _num_colors; }

private:
    // 按颜色从小到大的行号
    IndexType *_reordered_rows = nullptr;

    // 各个颜色在_reordered_rows中的起始偏移,长度为_num_colors+1
    std::vector<IndexType> _color_offsets;

    // 长度,和矩阵行数相等
    IndexType _size = 0;

    // 颜色总数
    IndexType _num_colors = 0;
};

LowerCsrMatrixColor1::LowerCsrMatrixColor1(const CsrMatrix &lower_mat) {
    _size = lower_mat.num_rows;

    // 先计算各行的颜色,第i行的颜色保存在colors[i]中
    IndexType *colors = nullptr;
    CHECK(cudaMalloc(&colors, _size * sizeof(IndexType)));

    int max_grid = 0;
    int block = 0;
    LaunchConfig(&max_grid, &block);

    int grid = (_size + block - 1) / block;
    if (grid > max_grid)
        grid = max_grid;

    // 填充为-1表示所有节点都未着色
    FillIndexTypeKernel<<<grid, block>>>(colors, _size, -1);

    // 用于归约计算有多少个节点已经着色了
    IndexType *counts = nullptr;
    CHECK(cudaMalloc(&counts, grid * sizeof(IndexType)));

    std::vector<IndexType> h_counts(grid);

    IndexType num_colored_rows = 0;
    IndexType curr_color = 0;
    
    // curr_color为当前颜色,或者称为当前层数
    for (curr_color = 0; curr_color < _size; ++curr_color) {
        // 着色,并对着色的节点个数进行归约
        ColorLowerMatrixKernel<><<<grid, block>>>(
                colors, counts, curr_color,
                lower_mat.row_offsets, lower_mat.column_indices, _size);

        // 将归约的结果传输到CPU上求和
        CHECK(cudaMemcpy(h_counts.data(), counts, grid * sizeof(IndexType),
                         cudaMemcpyDeviceToHost));
        for (IndexType n : h_counts)
            num_colored_rows += n;

        // 所有节点都已着色时退出算法
        if (num_colored_rows == _size)
            break;
    }

    CHECK(cudaFree(counts));

    _num_colors = curr_color + 1;

    // 根据颜色将行号排序
    CHECK(cudaMalloc(&_reordered_rows, _size * sizeof(IndexType)));
    LaunchKernelWithNumThreads(
            SequenceKernel, _size, 0, 0,
            _reordered_rows, _size
    );

    thrust::stable_sort_by_key(
            thrust::device_pointer_cast(colors),
            thrust::device_pointer_cast(colors + _size),
            thrust::device_pointer_cast(_reordered_rows)
    );

    // 计算每种颜色的起始偏移
    IndexType *d_color_offsets = nullptr;
    CHECK(cudaMalloc(&d_color_offsets, (_num_colors + 1) * sizeof(IndexType)));
    LaunchKernelWithNumThreads(
            Coo2CsrKernel, _size, 0, 0,
            d_color_offsets, colors, _num_colors, _size
    );

    _color_offsets.resize(_num_colors + 1);
    CHECK(cudaMemcpy(_color_offsets.data(), d_color_offsets,
                     (_num_colors + 1) * sizeof(IndexType), cudaMemcpyDeviceToHost));

    CHECK(cudaFree(d_color_offsets));
    CHECK(cudaFree(colors));
}

根据这样的着色结果求解下三角方程的代码大致为:

__global__ void LowerSolverKernel1(
        ValueType *x,
        const ValueType *__restrict__ lower_values,
        const IndexType *__restrict__ lower_row_offsets,
        const IndexType *__restrict__ lower_column_indices,
        const IndexType *__restrict__ reordered_rows,
    	const ValueType *__restrict__ b,
        const IndexType num_rows
) {
    IndexType i = threadIdx.x + blockIdx.x * blockDim.x;
    const IndexType stride = blockDim.x * gridDim.x;
    for (; i < num_rows; i += stride) {
        IndexType row = reordered_rows[i];

        ValueType t = 0;

        IndexType end = lower_row_offsets[row + 1] - 1;
        IndexType j = lower_row_offsets[row];
        for (; j < end; ++j)
            t += lower_values[j] * x[lower_column_indices[j]];

        x[row] = (b[row] - t) / lower_values[j];
    }
}

// 求解Lx=b,每次解一层
CsrMatrix L;
Vector b;

Vector x(N);
LowerCsrMatrixColor1 lower_colors(L);
for (IndexType color = 0; color < lower_colors.num_colors(); ++color) {
    // 本颜色有多少行,亦即有多少个未知数
    IndexType num_rows = lower_colors.color_offsets(color + 1)
                         - lower_colors.color_offsets(color);
    
    LaunchKernelWithNumThreads(
            LowerSolverKernel1, num_rows, 0, 0,
            x.data(),
            L.values, L.row_offsets, L.column_indices,
            lower_colors.reordered_rows(color), b.data(),
            num_rows
    );
}

§并行DILU预处理

DILU预处理不仅涉及求解三角方程,还涉及如何构造预处理子。构造过程主要分为两步:1、取出系数矩阵的严格上三角和严格下三角部分;2、计算E。其中的主要难点在于如何计算E。

首先取出矩阵A的严格下三角矩阵L和严格上三角矩阵U:可以先遍历一次A,计算出L和U各行的非零元素个数,然后做前缀扫描生成row_offsets,接着分配L和U的内存,最后再遍历一次A,将元素填充到L和U中。代码大致为:

// 计算L和U的各行非零元素个数
__global__ void GetSplitMatrixSizeKernel(
        IndexType *__restrict__ lower_row_size,
        IndexType *__restrict__ upper_row_size,
        const IndexType *__restrict__ row_offsets,
        const IndexType *__restrict__ column_indices,
        const IndexType num_rows
) {
    IndexType row = threadIdx.x + blockIdx.x * blockDim.x;
    IndexType stride = blockDim.x * gridDim.x;
    for (; row < num_rows; row += stride) {
        IndexType start = row_offsets[row];
        IndexType end = row_offsets[row + 1];

        // 跳过严格下三角,此后j指向对角元素
        IndexType j = start;
        while (column_indices[j] < row)
            ++j;

        // 下三角
        lower_row_size[row] = j - start + 1;

        // 上三角
        upper_row_size[row] = end - j;
    }
}

// 填充L和U
__global__ void SplitMatrixKernel(
        ValueType *__restrict__ lower_values,
        IndexType *__restrict__ lower_column_indices,
        ValueType *__restrict__ upper_values,
        IndexType *__restrict__ upper_column_indices,
        ValueType *__restrict__ diag,
        const IndexType *__restrict__ lower_row_offsets,
        const IndexType *__restrict__ upper_row_offsets,
        const ValueType *__restrict__ values,
        const IndexType *__restrict__ row_offsets,
        const IndexType *__restrict__ column_indices,
        const IndexType num_rows
) {
    IndexType row = threadIdx.x + blockIdx.x * blockDim.x;
    IndexType stride = blockDim.x * gridDim.x;
    for (; row < num_rows; row += stride) {
        // 下三角,对角元素的位置填入1
        IndexType n = lower_row_offsets[row];
        IndexType j = row_offsets[row];
        while (column_indices[j] < row) {
            lower_values[n] = values[j];
            lower_column_indices[n] = column_indices[j];
            ++n;
            ++j;
        }
        lower_values[n] = 1;
        lower_column_indices[n] = row;

        // 对角
        if (column_indices[j] == row)
            diag[row] = values[j++];

        // 上三角,对角元素的位置填入1
        n = upper_row_offsets[row];
        upper_values[n] = 1;
        upper_column_indices[n] = row;
        ++n;
        while (j < row_offsets[row + 1]) {
            upper_values[n] = values[j];
            upper_column_indices[n] = column_indices[j];
            ++n;
            ++j;
        }
    }
}

// 接口继承的基类
class Preconditioner {
public:
    virtual void precondition(Vector *out, const Vector &in) const = 0;
    virtual ~Preconditioner() {}
};

// 实现precondition
class DILUPreconditioner : public Preconditioner {
public:
    DILUPreconditioner(const CsrMatrix &A);

    void precondition(Vector *out, const Vector &in) const override;

    ~DILUPreconditioner() override = default;

private:
    Vector invE;
    CsrMatrix lower;   // I + invE * L
    CsrMatrix upper;   // I + invE * U

    LowerCsrMatrixColor1 lower_colors;
    UpperCsrMatrixColor1 upper_colors;
};

DILUPreconditioner::DILUPreconditioner(const CsrMatrix &A) {
    // 取出下三角和上三角
    invE.resize(A.num_rows);
    lower.resize(A.num_rows, A.num_cols);
    upper.resize(A.num_rows, A.num_cols);

    LaunchKernelWithNumThreads(
            GetSplitMatrixSizeKernel, A.num_rows, 0, 0,
            lower.row_offsets,
            upper.row_offsets,
            A.row_offsets,
            A.column_indices,
            A.num_rows
    );

    thrust::exclusive_scan(
            thrust::device_ptr<IndexType>(lower.row_offsets),
            thrust::device_ptr<IndexType>(lower.row_offsets + A.num_rows + 1),
            thrust::device_ptr<IndexType>(lower.row_offsets)
    );

    thrust::exclusive_scan(
            thrust::device_ptr<IndexType>(upper.row_offsets),
            thrust::device_ptr<IndexType>(upper.row_offsets + A.num_rows + 1),
            thrust::device_ptr<IndexType>(upper.row_offsets)
    );

    // 分配空间
    CHECK(cudaMemcpy(&lower.num_entries, lower.row_offsets + lower.num_rows,
                     sizeof(IndexType), cudaMemcpyDeviceToHost));
    CHECK(cudaMemcpy(&upper.num_entries, upper.row_offsets + upper.num_rows,
                     sizeof(IndexType), cudaMemcpyDeviceToHost));

    CHECK(cudaMalloc(&lower.values, lower.num_entries * sizeof(ValueType)));
    CHECK(cudaMalloc(&lower.column_indices, lower.num_entries * sizeof(IndexType)));

    CHECK(cudaMalloc(&upper.values, upper.num_entries * sizeof(ValueType)));
    CHECK(cudaMalloc(&upper.column_indices, upper.num_entries * sizeof(IndexType)));


    // 取出
    LaunchKernelWithNumThreads(
            SplitMatrixKernel, A.num_rows, 0, 0,
            lower.values, lower.column_indices,
            upper.values, upper.column_indices,
            invE.data(),
            lower.row_offsets, upper.row_offsets,
            A.values, A.row_offsets, A.column_indices, A.num_rows
    );
    
    // 着色
    lower_colors = LowerCsrMatrixColor1(lower);
    upper_colors = UpperCsrMatrixColor1(upper);
    
    // ...
}

这里DILUPreconditioner的成员不是直接保存E、L和U,而是保存$I + E^{-1}L$、$I+E^{-1}U$和$E^{-1}$,计算过程参考第一节。

那么接下来计算$E^{-1}$,其过程类似于求解三角方程组,可以对三角方程求解算法加以改造用来计算$E^{-1}$。简单起见,这里假定A的数值可以不对称,但其结构是对称的,即若L(i,j)!=0,则U(j,i)!=0,反之亦然。代码大致为:

__global__ void ComputeInvEKernel(
        ValueType *invE,
        const ValueType *__restrict__ lower_values,
        const ValueType *__restrict__ upperT_values,
        const IndexType *__restrict__ row_offsets,
        const IndexType *__restrict__ column_indices,
        const IndexType *__restrict__ reordered_rows,
        const IndexType num_rows
) {
    IndexType i = threadIdx.x + blockIdx.x * blockDim.x;
    const IndexType stride = blockDim.x * gridDim.x;

    for (; i < num_rows; i += stride) {
        IndexType row = reordered_rows[i];

        ValueType t = 0;

        IndexType j = row_offsets[row];
        for (; column_indices[j] < row; ++j)
            t += lower_values[j] * upperT_values[j] * invE[column_indices[j]];

        invE[row] = 1.0 / (invE[row] - t);
    }
}

// 计算DILU中的I + invE * L。lower的对角线已经设置为1了,计算时跳过。
__global__ void ComputeInvELowerKernel(
        ValueType *__restrict__ lower,
        const ValueType *__restrict__ invE,
        const IndexType *__restrict__ lower_row_offsets,
        const IndexType num_rows
) {
    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
    const IndexType stride = blockDim.x * gridDim.x / 16;

    for (IndexType row = tid / 16; row < num_rows; row += stride) {
        const IndexType end = lower_row_offsets[row + 1] - 1;  // 对角元素不处理
        IndexType j = lower_row_offsets[row] + tid % 16;
        for (; j < end; j += 16)
            lower[j] *= invE[row];
    }
}

// 计算DILU中的I + invE * U。upper的对角线已经设置为1了,计算时跳过。
__global__ void ComputeInvEUpperKernel(
        ValueType *__restrict__ upper,
        const ValueType *__restrict__ invE,
        const IndexType *__restrict__ upper_row_offsets,
        const IndexType num_rows
) {
    unsigned int tid = threadIdx.x + blockIdx.x * blockDim.x;
    const IndexType stride = blockDim.x * gridDim.x / 16;

    for (IndexType row = tid / 16; row < num_rows; row += stride) {
        const IndexType end = upper_row_offsets[row + 1];
        IndexType j = upper_row_offsets[row] + 1 + tid % 16;   // 对角元素不处理
        for (; j < end; j += 16)
            upper[j] *= invE[row];
    }
}

DILUPreconditioner::DILUPreconditioner(const CsrMatrix &A) {
    // ...
    
    // 计算invE,先将upper转置
    CsrMatrix upperT = upper.transpose();
    for (IndexType color = 0; color < lower_colors.num_colors(); ++color) {
        IndexType num_rows = lower_colors.color_offsets(color + 1)
                             - lower_colors.color_offsets(color);

        LaunchKernelWithNumThreads(
                ComputeInvEKernel, num_rows, 0, 0,
                invE.data(),
                lower.values, upperT.values,
                lower.row_offsets, lower.column_indices,
                lower_colors.reordered_rows(color), num_rows
        );
    }


    // 计算I + invE * L和I + invE * U。lower和upper的对角已经设置为1了,乘的时候
    // 跳过即可。
    LaunchKernelWithNumThreads(
            ComputeInvELowerKernel, lower.num_rows * 16ull, 0, 0,
            lower.values, invE.data(), lower.row_offsets, lower.num_rows
    );

    LaunchKernelWithNumThreads(
            ComputeInvEUpperKernel, upper.num_rows * 16ull, 0, 0,
            upper.values, invE.data(), upper.row_offsets, upper.num_rows
    );
}

相应地,预处理过程的代码大致为:

__global__ void HadamardKernel(
        ValueType *result,
        const ValueType *v1,
        const ValueType *v2,
        const IndexType N
) {
    const int stride = blockDim.x * gridDim.x;
    for (IndexType i = blockIdx.x * blockDim.x + threadIdx.x; i < N; i += stride)
        result[i] = v1[i] * v2[i];
}

// 解(E + L) E^-1 (E + U) y = x,即(I + E^-1 L) (I + E^-1 U) y = E^-1 x。
void DILUPreconditioner::precondition(Vector *out, const Vector &in) const {
    if (invE.size != in.size)
        throw std::runtime_error{"precondition error: A.num_rows!=v.size"};

    out->resize(in.size);

    // out = invE * x
    LaunchKernelWithNumThreads(
            HadamardKernel, in.size, 0, 0,
            out->data(), invE.data(), in.data(), in.size
    );

    // (I + E^-1 L) t = E^-1 x
    for (IndexType color = 0; color < lower_colors.num_colors(); ++color) {
        IndexType num_rows = lower_colors.color_offsets(color + 1)
                             - lower_colors.color_offsets(color);

        LaunchKernelWithNumThreads(
                LowerSolverKernel1, num_rows, 0, 0,
                out->data(),
                lower.values, lower.row_offsets, lower.column_indices,
                lower_colors.reordered_rows(color), out->data(),
                num_rows
        );
    }

    // (I + E^-1 U) y = t
    for (IndexType color = 0; color < upper_colors.num_colors(); ++color) {
        IndexType num_rows = upper_colors.color_offsets(color + 1)
                             - upper_colors.color_offsets(color);

        LaunchKernelWithNumThreads(
                UpperSolverKernel1, num_rows, 0, 0,
                out->data(),
                upper.values, upper.row_offsets, upper.column_indices,
                upper_colors.reordered_rows(color), out->data(),
                num_rows
        );
    }
}

§预处理共轭梯度法

预处理共轭梯度法的算法为:

$$ \begin{aligned} & r_0=b-Ax^{(0)} \\ & z_0=P^{-1}r_0 \\ & p_1=z_0 \\ & \rho_0 = r_0^Tz_0 \\ & for \ k=1,2,\dots,n \\ & \qquad q_k=Ap_k \\ & \qquad \xi_k=\rho_{k-1}/(p_k^Tq_k) \\ & \qquad x^{(k)} = x^{(k-1)}+\xi_k p_k \\ & \qquad r_k=r_{k-1}-\xi_kq_k \\ & \qquad if\ \|r_k\| < error \\ & \qquad\qquad break; \\ & \qquad end \\ & \qquad z_k=P^{-1}r_k \\ & \qquad \rho_k=r_k^Tz_k \\ & \qquad t_k=\rho_k/\rho_{k-1} \\ & \qquad p_{k+1}=z_k+t_kp_k \\ & end \end{aligned} $$
实现为:
void PCG(
        Vector *x,
        const CsrMatrix &A,
        const Vector &b,
        const Preconditioner &P,
        ValueType error,
        IndexType max_steps
) {

    IndexType N = b.size;

    // r_0 = b - Ax_0
    Vector r(N);
    Residual(&r, A, b, *x);

    ValueType norm_r = sqrt(Dot(r, r));

    if (max_steps <= 0)
        max_steps = std::numeric_limits<IndexType>::max();

    // 初始解精度已经足够高
    if (norm_r < error)
        return;

    // z_0 = P^{-1} * r_0
    Vector z(N);
    P.precondition(&z, r);

    // p_1 = z_0
    Vector p = z;

    // r_k^T * z_k and r_{k-1}^T * z_{k-1}
    ValueType rho0 = Dot(r, z);
    ValueType rho1 = 0;

    // q_k
    Vector q(N);

    for (IndexType k = 1; k <= max_steps; ++k) {
        // q_k = A * p_k
        SpMv(&q, A, p);

        // xi_k = (r_{k-1}^T * z_{k-1}) / (p_k^T * q_k)
        ValueType xi = rho0 / Dot(p, q);

        // x_k = x_{k-1} + xi_k * p_k
        Add(x, *x, p, xi);

        // r_k = r{k-1} - xi_k * q_k
        Add(&r, r, q, -xi);

        norm_r = sqrt(Dot(r, r));

        if (norm_r < error)
            break;

        // z_k = P^{-1} * r_k
        P.precondition(&z, r);

        rho1 = rho0;
        rho0 = Dot(r, z);

        // p_{k+1} = z_k + t_k * p_k
        Add(&p, z, p, rho0 / rho1);
    }
}

使用方法:

Host::CsrMatrix A;     // generate A
Host::Vector b;        // generate b

// 数据传输
SingleGPU::CsrMatrix d_A(A);
SingleGPU::Vector d_b(b);

// 构造预处理子
SingleGPU::DILUPreconditioner dilu(d_A);

SingleGPU::Vector d_x(A.num_rows, 1.0);   // 初始解为全1向量,求解Ax=b

SingleGPU::PCG(&d_x, d_A, d_b, dilu, 1e-6, 100);  // 残量小于1e-6或者迭代100次后退出

§思考问题

1、多重着色部分的串行代码未详细说明矩阵的存取方式,而是采用L(i,j)这样的伪代码,请考虑如何在CSR存储格式下实现它。

提示1:in_degree[i] = row_offsets[i + 1] - row_offsets[i] - 1;

提示2:删除最大独立集中的节点时,如何快速地找到它们的后继节点?

2、下三角方程并行求解的代码中,每个线程只计算一行,请考虑怎样将它改成像SpMv那样多个线程计算一行。
