Tree Path Queries
Created on: January 25, 2025
Path Multiple Queries
Accuracy: 1.29%
Submissions: 270+
Points: 30
You have been given a tree of n nodes. Each node is numbered from 0 to n-1. You have been given a 2D integer array edges[][] of size n-1 where each edges[i] = [u_i, v_i] denotes that there is an edge between nodes u_i and v_i in the tree. You have also been given an array arr[] of size n, which denotes the value of each node in the tree.
You have to answer q independent queries given in the 2D integer array queries[][]. Each query is of type (u, v, x). For each query, you have to multiply the value of each node on the path from node u to node v by x and return the sum of the values of all the nodes in the tree. Since the answer can be large, return it modulo 10^9 + 7.
Note: Each query is independent, meaning changes made during one query do not affect others.
Examples:
Input: arr[] = [1, 2, 3], edges[][] = [[0, 1], [1, 2]], queries[][] = [[0, 2, 2], [1, 2, 3]]
Output: [12, 16]
Explanation: The tree is as follows:
0
|
1
|
2
For the first query, the path from 0 to 2 is 0 -> 1 -> 2. The values of the nodes on the path become [2, 4, 6], so the sum of all node values is 12. For the second query, the path from 1 to 2 gives values [1, 6, 9], so the sum is 16.
code in c++
Solution Explanation
We have a tree of n nodes (numbered 0 to n-1) with an array arr[] denoting each node's value. We also have a list of n-1 edges edges[][] that describes the tree. Finally, we receive q queries, each of the form (u, v, x). For each query, we must multiply the value of every node on the path from u to v by x and report the new total sum of all node values, modulo 10^9 + 7.
A naive approach would be, for each query: walk the path from u to v, multiply the values along it, sum all n node values, and then undo the changes (since queries are independent).
This is too slow when n and q can be large (e.g., up to 10^5): it costs O(n) per query, i.e., O(n * q) overall.
The key idea is to precompute an array distSum[node] that stores the sum of node values from the root (we can pick any node as root, typically 0) down to node. If we define distSum[node] to be the sum of the values on the path from the chosen root (say 0) to node, then:

pathSum(u, v) = distSum[u] + distSum[v] - 2 * distSum[LCA(u, v)] + arr[LCA(u, v)]

because when we add distSum[u] and distSum[v], the path from the root to LCA(u, v) is counted twice, so we subtract it twice and then add back the LCA's own value once.
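As a quick sanity check on the sample tree (values 1, 2, 3 along the path 0 - 1 - 2, rooted at 0): distSum = [1, 3, 6]. For u = 0, v = 2 we have LCA(0, 2) = 0, so pathSum = 1 + 6 - 2*1 + 1 = 6, which indeed equals 1 + 2 + 3.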
Hence, the plan:
1. Root the tree at node 0 (arbitrary choice).
2. One DFS computes, for every node: distSum[node], the sum of values from the root to node; depth[node], the depth of each node (distance in edges from the root); and parent[node][0], the immediate parent of each node in the tree (for LCA preprocessing).
3. Binary lifting fills parent[node][k] = the 2^k-th ancestor of node.
4. For each query (u, v, x): compute lca = LCA(u, v), compute pathSum(u, v) using distSum[], and report totalSum + (x - 1) * pathSum(u, v) modulo 10^9 + 7.
Because each query is independent, we do not actually modify arr[]; we merely compute the final sum as if we had done so.
First, a DFS from the root computes distSum, depth, and parent[][0]; the LCA table and the per-query logic complete the solution:

```cpp
#include <bits/stdc++.h>
using namespace std;

static const int MOD = 1000000007;

vector<int> adj[100000];    // adjacency list
long long distSum[100000];  // distSum[node] = sum of values from root to node
int parent[100000][20];     // parent[node][k] = 2^k-th ancestor of node
int depth[100000];          // depth of each node in the tree

// We'll do a DFS from 'root' (say 0) to fill distSum, depth, parent[][0].
void dfs(int node, int par, long long currSum, int d, const vector<long long>& values) {
    distSum[node] = currSum;
    depth[node] = d;
    parent[node][0] = (par == -1 ? node : par); // immediate parent; if root, parent is itself (or -1).
    for (auto &nbr : adj[node]) {
        if (nbr == par) continue;
        dfs(nbr, node, currSum + values[nbr], d + 1, values);
    }
}

// Precompute all 2^k-th ancestors for each node.
void buildLCA(int n) {
    // we have parent[node][0] already from DFS
    // compute parent[node][k] for k >= 1
    for (int k = 1; k < 20; k++) {
        for (int v = 0; v < n; v++) {
            parent[v][k] = parent[ parent[v][k-1] ][k-1];
        }
    }
}

// Return LCA(u, v) in O(log n).
int getLCA(int u, int v) {
    if (depth[u] < depth[v]) swap(u, v);
    // 1) Lift 'u' up to the same depth as 'v'.
    int diff = depth[u] - depth[v];
    for (int k = 0; k < 20; k++) {
        if (diff & (1 << k)) {
            u = parent[u][k];
        }
    }
    if (u == v) return u;
    // 2) Lift both 'u' and 'v' up until their parents differ.
    for (int k = 19; k >= 0; k--) {
        if (parent[u][k] != parent[v][k]) {
            u = parent[u][k];
            v = parent[v][k];
        }
    }
    // now parent[u][0] = parent[v][0] = LCA
    return parent[u][0];
}

int main(){
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    // --- Input Section ---
    int n;
    cin >> n;
    vector<long long> values(n);
    for (int i = 0; i < n; i++) {
        cin >> values[i];
    }
    // build adjacency
    for(int i = 0; i < n; i++){
        adj[i].clear();
    }
    for(int i = 0; i < n-1; i++){
        int u, v;
        cin >> u >> v; // assuming 0 <= u,v < n
        adj[u].push_back(v);
        adj[v].push_back(u);
    }

    // --- Preprocessing ---
    // pick node 0 as root (arbitrary)
    // DFS to get distSum, depth, and parent[][0]
    dfs(0, -1, values[0], 0, values);
    buildLCA(n);

    // sum of all node values
    long long totalSum = 0;
    for (int i = 0; i < n; i++){
        totalSum += values[i];
        totalSum %= MOD; // can mod here or keep it in 64-bit until final step
    }

    // --- Queries ---
    int q;
    cin >> q;
    while(q--){
        int u, v;
        long long x;
        cin >> u >> v >> x;
        // get LCA
        int lca = getLCA(u, v);
        // path sum = distSum[u] + distSum[v] - 2*distSum[lca] + values[lca]
        long long pathSum = distSum[u] + distSum[v] - 2LL * distSum[lca] + values[lca];
        // newSum = totalSum + (x-1)*pathSum, taking care of mod
        // be mindful of negative possibility of (x-1), so do mod carefully
        long long ans = (totalSum % MOD) + (( (x - 1) % MOD ) * (pathSum % MOD ) ) % MOD;
        ans = (ans % MOD + MOD) % MOD; // ensure non-negative
        cout << ans << "\n";
    }
    return 0;
}
```
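Two design choices in this code are worth noting: parent[node][0] of the root is set to the root itself, so the binary-lifting table saturates at the root instead of indexing out of bounds; and the recursive dfs can overflow the call stack on a deep path-shaped tree near n = 10^5, so an iterative DFS is a safer variant under tight stack limits.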
dfs function: invoked once as dfs(0, -1, values[0], 0, values) (root 0, parent = -1, current distance-sum = values[0], depth = 0). It fills distSum[node], depth[node], and parent[node][0] for each node; distSum[node] holds the sum of values from the root to this node.
buildLCA function: fills parent[node][k], meaning the 2^k-th ancestor of node.
getLCA function: first lifts the deeper node u so that depth[u] == depth[v]. If then u == v, that is the LCA. Otherwise, lift both nodes up step by step from the largest power of two until their parents differ; parent[u][0] is then the LCA.
Answer each query in O(log n): compute pathSum = distSum[u] + distSum[v] - 2 * distSum[lca] + values[lca], then apply the multiplier x: the new total is totalSum + (x - 1) * pathSum, taken modulo 10^9 + 7 (adding MOD back if an intermediate value goes negative).
Since each query is independent, we do not update arr[] or distSum[]; we only use them to calculate the hypothetical new sum.
That’s it! This method works efficiently for large n and q (up to around 10^5 each), running in about O((n + q) log n) time, which is acceptable in most competitive programming contexts.
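For reference, here is the expected behavior on the sample from the problem statement, using the input format this particular implementation reads (n, the n values, the n-1 edges, then q and one query per line; the judge's exact format may differ):

```text
Input:
3
1 2 3
0 1
1 2
2
0 2 2
1 2 3

Output:
12
16
```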
```cpp
#include <bits/stdc++.h>
using namespace std;

static const int MAXN = 200000;
vector<int> adj[MAXN + 1];
int subtreeSize[MAXN + 1];
int n, q;

int dfs_subtree(int u, int p) {
    int sz = 1;
    for (int v : adj[u]) {
        if (v == p) continue;
        sz += dfs_subtree(v, u);
    }
    subtreeSize[u] = sz;
    return sz;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    cin >> n;
    for (int i = 1; i <= n; i++) {
        adj[i].clear();
    }
    for (int i = 0; i < n - 1; i++) {
        int a, b;
        cin >> a >> b;
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    cin >> q;
    while (q--) {
        int x, k;
        cin >> x >> k;
        dfs_subtree(x, -1);
        nth_element(subtreeSize + 1, subtreeSize + k, subtreeSize + n + 1);
        cout << subtreeSize[k] << "\n";
    }
    return 0;
}
```
Do more optimized
Below is an explanation of why the posted code is costly and some discussion of what can (and usually cannot) be done to make it faster, depending on your constraints and the exact problem statement.
Your code:
```cpp
#include <bits/stdc++.h>
using namespace std;

static const int MAXN = 200000;
vector<int> adj[MAXN + 1];
int subtreeSize[MAXN + 1];
int n, q;

int dfs_subtree(int u, int p) {
    int sz = 1;
    for (int v : adj[u]) {
        if (v == p) continue;
        sz += dfs_subtree(v, u);
    }
    subtreeSize[u] = sz;
    return sz;
}

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    cin >> n;
    for (int i = 1; i <= n; i++) {
        adj[i].clear();
    }
    for (int i = 0; i < n - 1; i++) {
        int a, b;
        cin >> a >> b;
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    cin >> q;
    while (q--) {
        int x, k;
        cin >> x >> k;
        // 1) Root the tree at x and compute subtree sizes for every node.
        dfs_subtree(x, -1);
        // 2) Partition/sub-select so that the k-th smallest subtreeSize is at index k
        nth_element(subtreeSize + 1, subtreeSize + k, subtreeSize + n + 1);
        // 3) Print the k-th element (i.e., the k-th smallest subtree size).
        cout << subtreeSize[k] << "\n";
    }
    return 0;
}
```
Rooting the tree at x and computing subtree sizes with dfs_subtree(x, -1) takes O(n) time per query in the worst case. nth_element(...) is also linear in the worst case (its average case has a smaller constant factor, but it is still linear). If n and q can both be up to 200,000, that is n * q = 4 * 10^10 operations in the worst case, which is definitely too large for a typical time limit of 1-3 seconds.
Hence, the posted approach will not scale for large n and q.
This problem—if it is truly “for each query (x, k), root the tree at node x, list the sizes of the subtrees of all nodes under that rooting, and output the k-th smallest subtree size”—turns out to be very difficult to do quickly for large n and q. The reason is that re-rooting at x completely rearranges the parent-child relationships in the tree, so all subtree sizes can change.
A few major points:
Precomputing all possible re-rootings
If you only need the size of the subtree of a single node (rather than the entire distribution of all subtrees), then re-rooting can be handled in O(1) or O(log n) per query after one pass of a special reroot-DP. But you want the k-th smallest among all subtree sizes, which means you need the full distribution of them, not just one node's subtree.
Offline / data-structure approaches
Typically, data structures like Heavy-Light Decomposition, Segment Trees, or Euler Tour can help you query paths, LCA, or partial “subtree sums” or “subtree min/max.” But to get all subtree sizes of a re-rooted tree, you basically have to re-visit a large part of the tree or store a huge amount of data.
Also, “offline” approaches won’t help much, since each query fundamentally changes the root, which changes all subtree sizes globally.
Conclusion: If the problem is truly “for each query (x, k), re-root at x, gather all subtree sizes, and find the k-th smallest,” there is no known data structure that does that in sublinear time for arbitrary x. Essentially, you are forced to do an O(n)-like procedure per query (which is exactly what your DFS + nth_element does).
Therefore, if n and q are both large (like 200k), no purely combinatorial optimization or standard data-structure trick will reduce it enough to pass typical time limits.
Small n or small q
Restricted queries
A different question
Sometimes problems are simpler, for example: “What is the size of the subtree of v, if the root is x?” That can be answered in O(log n) per query after a single DFS (see the sketch below). But your code does something more complicated—it builds the size array for all subtrees (and then selects the k-th smallest).
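A minimal sketch of that simpler single-node query (not the full distribution), assuming 1-indexed nodes, a fixed original root of 1, and recursion depth being acceptable; the helper names (isAncestor, liftTo, subtreeSizeRerooted) are illustrative, not from any library:

```cpp
#include <bits/stdc++.h>
using namespace std;

const int MAXN = 200000, LOG = 18;
vector<int> adj[MAXN + 1];
int up[MAXN + 1][LOG];  // up[v][k] = 2^k-th ancestor of v
int tin[MAXN + 1], tout[MAXN + 1], depthArr[MAXN + 1], sz[MAXN + 1];
int timer_ = 0;

void dfs(int u, int p) {
    tin[u] = timer_++;
    up[u][0] = (p == 0 ? u : p); // root's parent is itself
    for (int k = 1; k < LOG; k++) up[u][k] = up[up[u][k-1]][k-1];
    sz[u] = 1;
    for (int v : adj[u]) if (v != p) {
        depthArr[v] = depthArr[u] + 1;
        dfs(v, u);
        sz[u] += sz[v];
    }
    tout[u] = timer_++;
}

bool isAncestor(int a, int b) {        // is a an ancestor of b (or a == b)?
    return tin[a] <= tin[b] && tout[b] <= tout[a];
}

int liftTo(int x, int targetDepth) {   // ancestor of x at the given depth
    int diff = depthArr[x] - targetDepth;
    for (int k = 0; k < LOG; k++)
        if (diff & (1 << k)) x = up[x][k];
    return x;
}

// Size of the subtree of v when the tree is re-rooted at x.
int subtreeSizeRerooted(int v, int x, int n) {
    if (v == x) return n;                 // v is the root: whole tree
    if (!isAncestor(v, x)) return sz[v];  // x lies outside v's original subtree
    int c = liftTo(x, depthArr[v] + 1);   // child of v on the path down to x
    return n - sz[c];                     // everything except c's subtree
}

int main() {
    int n, q;
    cin >> n >> q;                        // assumed input format
    for (int i = 0; i < n - 1; i++) {
        int a, b; cin >> a >> b;          // 1-indexed endpoints
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    dfs(1, 0);                            // root once at node 1
    while (q--) {
        int v, x; cin >> v >> x;          // "size of subtree of v if root is x"
        cout << subtreeSizeRerooted(v, x, n) << "\n";
    }
    return 0;
}
```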
Caching dfs_subtree results: You cannot just store subtreeSize[u] for every possible root, because that would require O(n^2) memory and time.
std::partition or partial sorts: You already use nth_element, which is an average-case linear approach. That does not cut the work down below linear.
Fast I/O (scanf, printf, buffering) will not rescue an n * q = 4 * 10^10 approach from timeouts in typical contest constraints.
Depending on your exact constraints:
If n is up to a few thousand: the O(n)-per-query approach (DFS + nth_element) might be acceptable. Just make sure you implement DFS with adjacency lists efficiently, and use fast I/O. That is enough for smaller constraints.
If q is very small (like 1 to 10) and n is up to 200k: a handful of O(n) passes is still cheap, so the same code works.
If the queries share the same root: do one DFS, sort all n subtree sizes once, and answer each query with sortedSubtreeSizes[k-1]. That's O(n log n) preprocessing and O(1) per query, but it only applies if the root does not change (a minimal sketch follows this list).
If you only need some simpler statistic: for example a single node's subtree size under re-rooting (as discussed above), cheaper techniques apply.
If none of the above special conditions hold—and you truly need the distribution of all subtree sizes for a new root on each query—then you're stuck with an O(n)-per-query approach, which is not solvable for large n and q in a standard time limit.
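A minimal sketch of the fixed-root shortcut from the list above; the input format (n, root r, q, then the n-1 edges and one k per query) and 1-indexed nodes are assumptions:

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int n, r, q;
    cin >> n >> r >> q;
    vector<vector<int>> adj(n + 1);
    for (int i = 0; i < n - 1; i++) {
        int a, b; cin >> a >> b;
        adj[a].push_back(b);
        adj[b].push_back(a);
    }
    // Iterative DFS from r: records a parent-before-child visiting order.
    vector<int> par(n + 1, 0), order, stk;
    stk.push_back(r);
    par[r] = -1;
    while (!stk.empty()) {
        int u = stk.back(); stk.pop_back();
        order.push_back(u);
        for (int v : adj[u])
            if (v != par[u]) { par[v] = u; stk.push_back(v); }
    }
    // Accumulate subtree sizes by scanning the order backwards (children first).
    vector<int> sz(n + 1, 1);
    for (int i = (int)order.size() - 1; i > 0; i--)
        sz[par[order[i]]] += sz[order[i]];
    // Sort all n subtree sizes once; each query is then O(1).
    vector<int> sortedSubtreeSizes(sz.begin() + 1, sz.end());
    sort(sortedSubtreeSizes.begin(), sortedSubtreeSizes.end());
    while (q--) {
        int k; cin >> k;
        cout << sortedSubtreeSizes[k - 1] << "\n"; // k-th smallest subtree size
    }
    return 0;
}
```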
In short: You cannot truly optimize this to sublinear or near-logarithmic per query if each query re-roots the tree and requires the entire distribution of subtree sizes. The best you can do is the DFS + nth_element approach you already have (possibly with micro-optimizations that help for small/medium n), or you change the problem / accept that it only works for small n or small q.
Network Coverage Analysis
Accuracy: 3.51%
Submissions: 891+
Points: 20
Imagine you're a telecommunications engineer designing a network of communication satellites to cover a linear communication corridor.
You have n satellites, each with a specific coverage range defined by a 2D integer array arr. The coverage range of the i-th satellite is represented by [arr[i][0], arr[i][1]], meaning the satellite covers all points from arr[i][0] to arr[i][1] (both inclusive). The satellites may have varying ranges, some fully overlapping, partially intersecting, or covering only a single point.
Let P(x) represent the number of unique integer points along the corridor that are covered by exactly x satellites.
You are given a 2D integer array query of size m. For each query, you need to determine the maximum value of P(x) over all x such that query[i][0] <= x <= query[i][1].
Examples:
Input: arr[][] = [[2, 3], [3, 4], [5, 5], [3, 4]], query[][] = [[1, 2], [3, 4]]
Output: [2, 1]
Explanation: Point 2 is covered by exactly 1 satellite, point 3 by 3 satellites, point 4 by 2, and point 5 by 1, so P(1) = 2, P(2) = 1, P(3) = 1, and P(x) = 0 for every other x. The first query's answer is max(P(1), P(2)) = 2; the second's is max(P(3), P(4)) = 1.
code in c++
Below is a representative C++ approach that handles the “maximum of P(x) over a query range of x” queries, under the assumption that interval endpoints can be large (up to about 10^9) while n and the number of queries stay around 10^5-2*10^5.
Because the coordinate range can be large (possibly up to 10^9), we cannot simply make an array of that size. Instead, we use coordinate compression plus a line-sweep (difference array) to find how many integer points have coverage 0, 1, 2, …, up to n. We then store these counts in a frequency array freq[x] = P(x). Finally, we answer queries via a range-maximum data structure (e.g., Sparse Table or Segment Tree) built on top of freq.
1. Collect coordinates into coords: for each interval, push L_i and R_i + 1 into coords; then sort coords and remove duplicates. Compressed indices run from 0 to C-1, where C = coords.size().
2. Build a difference array diff[] of length C+1: for each interval, let start = indexOf(L_i) in compressed space and end = indexOf(R_i+1) in compressed space, then do diff[start]++ and diff[end]-- (if end < C, i.e. R_i+1 was not beyond the largest coordinate).
3. Prefix-sum it: cover[0] = diff[0], and for i=1..C-1: cover[i] = cover[i-1] + diff[i]. Now cover[i] tells how many satellites cover all integer points in the “block” between coords[i] and coords[i+1] - 1 (in the original coordinates).
4. Compute freq[x] = P(x): initialize freq[0..n] = 0; then for each i in 0..C-2 (i.e., for each segment in compressed space), let length = coords[i+1] - coords[i] (the number of integer points in that segment) and c = cover[i] (the coverage count for that entire segment), and add length to freq[c]. The last coordinate coords[C-1] does not define a full segment after it, so we iterate only up to C-2.
5. Build a range-maximum structure over freq[]: freq has size n+1, and a Sparse Table stores st[k][i] = max of freq[i .. i + 2^k - 1].
6. For a query (l, r), answer the maximum of freq[l+1..r-1], clamped to [0, n]; if l+1 > r-1, the range is empty and the answer is 0.
Important: This solution has overall complexity roughly O((n + m) log n) (dominated by coordinate compression, building the coverage, building the Sparse Table, and then answering queries).
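To make the sweep concrete on the sample arr[][] = [[2, 3], [3, 4], [5, 5], [3, 4]]: coords = [2, 3, 4, 5, 6] and diff = [1, 2, -1, -1, -1, 0], so cover = [1, 3, 2, 1, 0]. Every segment here has length 1, giving freq[1] = 2 (points 2 and 5), freq[2] = 1 (point 4), and freq[3] = 1 (point 3).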
Below is a condensed C++ implementation. Adjust as needed (e.g., for reading input or if you want a Segment Tree instead of a Sparse Table).
```cpp
#include <bits/stdc++.h>
using namespace std;

static const long long INF = 1e18;

// Sparse Table for range-maximum queries.
struct SparseTable {
    vector<vector<long long>> st; // st[k][i] = max of length 2^k starting at i
    vector<int> logVal;           // precomputed logs
    int n;

    void build(const vector<long long>& arr) {
        n = (int)arr.size();
        logVal.resize(n+1, 0);
        for (int i = 2; i <= n; i++) logVal[i] = logVal[i/2] + 1;
        int maxK = logVal[n];
        st.assign(maxK+1, vector<long long>(n, 0LL));
        // st[0][i] = arr[i]
        for (int i = 0; i < n; i++) {
            st[0][i] = arr[i];
        }
        // fill st[k][i]
        for (int k = 1; k <= maxK; k++) {
            int length = 1 << k;
            int half = length >> 1;
            for (int i = 0; i + length - 1 < n; i++) {
                st[k][i] = max(st[k-1][i], st[k-1][i + half]);
            }
        }
    }

    // query max on [L, R]
    long long query(int L, int R) {
        if (L > R) return 0; // no range => no coverage
        int length = R - L + 1;
        int k = logVal[length];
        int half = 1 << k;
        return max(st[k][L], st[k][R - half + 1]);
    }
};

int main(){
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    int n;
    cin >> n;
    vector<long long> L(n), R(n);
    for(int i = 0; i < n; i++){
        cin >> L[i] >> R[i];
    }

    // 1) Collect coordinates (both start and end+1)
    vector<long long> coords;
    coords.reserve(2*n);
    for(int i = 0; i < n; i++){
        coords.push_back(L[i]);
        coords.push_back(R[i] + 1);
    }

    // 2) Coordinate compression
    sort(coords.begin(), coords.end());
    coords.erase(unique(coords.begin(), coords.end()), coords.end());
    auto getIndex = [&](long long x){
        // find compressed index of x
        // we can use binary search because coords is sorted
        int idx = int(std::lower_bound(coords.begin(), coords.end(), x) - coords.begin());
        return idx;
    };
    int C = (int)coords.size();

    // 3) Difference array in compressed space
    vector<long long> diff(C+1, 0LL);
    for(int i = 0; i < n; i++){
        int start = getIndex(L[i]);
        int end = getIndex(R[i] + 1); // could be C if R[i]+1 is > all coords
        diff[start]++;
        if(end < C) diff[end]--;
    }

    // 4) Build coverage array in compressed space
    vector<long long> cover(C, 0LL);
    cover[0] = diff[0];
    for(int i = 1; i < C; i++){
        cover[i] = cover[i-1] + diff[i];
    }

    // 5) Compute frequency: freq[x] = how many integer points have coverage x
    //    coverage can be 0..n, so freq size is n+1
    vector<long long> freq(n+1, 0LL);
    // go through each segment [coords[i], coords[i+1]) in compressed space
    for(int i = 0; i < C-1; i++){
        long long c = cover[i];
        if(c < 0) c = 0; // safety check if coverage ever goes negative (shouldn't)
        if(c > n) c = n; // safety if coverage somehow > n (shouldn't in correct input)
        long long length = coords[i+1] - coords[i]; // number of integer points
        if(length <= 0) continue;
        // Add these length many points to freq[ coverage = c ]
        freq[(int)c] += length;
    }

    // 6) Build Sparse Table (or Segment Tree) over freq[] to answer range max queries
    SparseTable st;
    st.build(freq);

    // 7) Process queries
    int Q;
    cin >> Q;
    while(Q--){
        long long Lq, Rq;
        cin >> Lq >> Rq;
        // we want max over x in (Lq, Rq) => x from Lq+1 .. Rq-1
        // freq is indexed directly by coverage count, so query st on that range,
        // BUT we must clamp the range to [0..n], i.e. coverage is from 0..n.
        int leftCover = max(0LL, Lq+1);
        int rightCover = min((long long)n, Rq-1);
        // If the interval is empty, the answer is 0
        if(leftCover > rightCover){
            cout << 0 << "\n";
        } else {
            long long ans = st.query(leftCover, rightCover);
            cout << ans << "\n";
        }
    }
    return 0;
}
```
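One caveat on the query step: this implementation reads the bounds as strict (x from Lq+1 to Rq-1). The sample output [2, 1], though, is only consistent with inclusive bounds; if the judge expects those, set leftCover = max(0LL, Lq) and rightCover = min((long long)n, Rq) and keep everything else unchanged.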
Coordinate Compression:
Necessary when coordinates can be large (e.g. up to 10^9). We only store coverage information on the “breakpoints” of intervals, plus the segments in between.
Range of Coverage:
Coverage of any integer point lies between 0 and n, so freq has size n+1. We never need entries of freq for coverage above n, because it can't happen that more than n satellites cover the same point.
Handling Queries:
Each query is a range maximum over the freq array, with the coverage range clamped to [0, n]; an empty range yields 0 (by the problem statement or a safe assumption).
Sparse Table vs. Segment Tree:
Building a Sparse Table over freq[] is O(n log n); each query is then O(1).
Edge Cases:
freq[0] would be the “size of the entire corridor” (but the corridor itself might be unspecified), which is usually not a standard scenario; the sweep above only counts points between the smallest and largest breakpoints. Building freq[] is O(n log n) overall (dominated by sorting the coordinates). Hence the total is O((n + m) log n), which is typically acceptable for n up to around 10^5-2*10^5 in competitive programming.
Once we have freq[x] = P(x), each query is just a “range max” on that array.

Accuracy: 2.85%
Submissions: 1K+
Points: 30
In Geek Town, there are n shops, and each shop sells components required to construct m different types of items. The sizes of the components for each type, across all the shops, are given in a 2D integer array shops of size n x m, where shops[i][j] represents the size of the component of the j-th type available in the i-th shop. There is an infinite supply of each type of component in every shop.
To build an item of the j-th type, Geek can only use components of the j-th type. To construct an item of size x using components of size y, the number of components required is ceil(x/y). Geek is tasked with constructing one item of each of the m types, but all items must be of the same size.
Geek can visit at most two shops to purchase the required components, and has a budget of at most k rupees. Each component costs 1 rupee.
Your task is to determine the maximum size Geek can achieve for all m types of items so that all items are the same size and the total cost does not exceed k rupees.
Examples:
Input: n = 4, m = 3, shops[][] = [[3, 2, 5], [1, 3, 2], [2, 2, 2], [1, 1, 1]], k = 11
Output: 12
Explanation: Geek can select shop 1 and shop 2. Total cost for size 12: 1st item from shop 1 with cost = ceil(12/3) = 4, 2nd item from shop 2 with cost = ceil(12/3) = 4, 3rd item from shop 1 with cost = ceil(12/5) = 3. Total = 4 + 4 + 3 = 11 <= k.
Code in c++
Below is a step-by-step explanation and a corresponding C++ solution for the problem:
Problem Restatement
We have:
- n shops (indexed 0 to n-1).
- m types of components/items (indexed 0 to m-1).
- A 2D array shops[n][m], where shops[i][j] is the size of the j-th type of component sold by the i-th shop. We have infinite supply in each shop.
- We want to build one item of each of the m types, all of the same size X.
- To build an item of size X with component-size y, we need ceil(X/y) such components (each component costs 1 rupee).
- We can buy components from at most two shops in total for all items. (Equivalently, each type must come from exactly one of those chosen shops, but we can choose up to two shops overall.)
- We have a budget k (the total rupees we can spend across all items).
Goal: Maximize the common item-size X subject to the total cost being at most k.
We will binary search on X, between low = 0 and high = max_possible_size, where max_possible_size can be something like maxSize * k (maxSize being the largest component size in the input). This is a safe though possibly loose upper bound, because if X is bigger than maxSize * k, then even a single item built from the largest components needs more than k of them. For a candidate size X, we need a feasibility check.
Naïve approach to the feasibility check for a given X:
- For each shop i, compute costOne[i] = the sum over all types j of ceil(X / shops[i][j]). If the minimum costOne[i] is <= k, a single shop suffices.
- Otherwise, for every pair of shops (i1, i2), take for each type the cheaper of the two shops and sum those minima; if any pair stays within k, X is feasible.
Complexity concerns:
Computing one costOne[i] is O(m), so the single-shop pass is O(n * m) and the pair check is O(n^2 * m) per feasibility test.
However, if n is up to a few hundred or a thousand, and m is also in that moderate range, an optimized C++ solution might pass.
If the input constraints are large (e.g. n, m up to 10^5), there is no direct solution that would run in typical time limits. In that case, a more advanced or problem-specific data-structure approach is needed (and is quite complex).
But for smaller constraints (e.g. n and m up to about a thousand), the below approach can be made to work in practice.
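As a sanity check against the sample (k = 11): for X = 12, shop 1 alone costs ceil(12/3) + ceil(12/2) + ceil(12/5) = 4 + 6 + 3 = 13 and shop 2 alone costs 12 + 4 + 6 = 22, but the pair costs min(4, 12) + min(6, 4) + min(3, 6) = 4 + 4 + 3 = 11 <= 11, so X = 12 is feasible. For X = 13 every pair exceeds 11 (the best, shops 1 and 2, gives 5 + 5 + 3 = 13), so the binary search settles on 12.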
```cpp
#include <bits/stdc++.h>
using namespace std;

// Safely compute ceil(x/y) for positive x,y without floating error:
static inline long long ceilDiv(long long x, long long y) {
    // x, y > 0
    return (x + y - 1) / y;
}

int main(){
    ios::sync_with_stdio(false);
    cin.tie(nullptr);

    // 1) Read input
    int n, m;
    cin >> n >> m;
    vector<vector<long long>> shops(n, vector<long long>(m));
    long long maxSize = 0; // track max shops[i][j]
    for(int i = 0; i < n; i++){
        for(int j = 0; j < m; j++){
            cin >> shops[i][j];
            maxSize = max(maxSize, shops[i][j]);
        }
    }
    long long k;
    cin >> k; // total budget

    // 2) costOne[i] depends on X, so it is computed on the fly inside the
    //    feasibility function; we can't do a single pass up front.

    // 3) Define "canBuild(X)": can we build all items of size X within budget k?
    auto canBuild = [&](long long X) -> bool {
        // costOne[i] = sum_j ceil(X / shops[i][j]); computed on the fly:
        vector<long long> costOne(n, 0LL);

        // a) compute costOne[] for each shop,
        //    keeping track of minCost1 = min(costOne[i])
        long long minCost1 = LLONG_MAX;
        for(int i = 0; i < n; i++){
            long long total = 0;
            for(int j = 0; j < m; j++){
                long long c = ceilDiv(X, shops[i][j]);
                // If total > k already, we can break early (small speedup).
                if( (total += c) > k ) {
                    break;
                }
            }
            costOne[i] = total;
            minCost1 = min(minCost1, total);
        }
        // If we can do it with a single shop, great:
        if(minCost1 <= k) return true;

        // b) Otherwise, check pairs i1, i2: we want
        //    sum_j min(cost of type j in i1, cost of type j in i2) <= k.
        //    costOne only stores sums, so store per-type costs in costArrays.
        static bool storeInitialized = false;
        static vector<vector<long long>> costArrays;
        if(!storeInitialized){
            costArrays.assign(n, vector<long long>(m, 0LL));
            storeInitialized = true;
        }
        // Fill costArrays[i][j] = ceil(X / shops[i][j]) for this X
        for(int i = 0; i < n; i++){
            for(int j = 0; j < m; j++){
                costArrays[i][j] = ceilDiv(X, shops[i][j]);
            }
        }

        // Now try all pairs. We cannot skip "bad" shops in general, because
        // two shops that are each over budget alone may complement each other.
        // Direct O(n^2 * m), breaking early whenever partial sums exceed k.
        for(int i1 = 0; i1 < n; i1++){
            for(int i2 = i1+1; i2 < n; i2++){
                long long total = 0;
                for(int j = 0; j < m; j++){
                    total += min(costArrays[i1][j], costArrays[i2][j]);
                    if(total > k) {
                        break; // no need to continue
                    }
                }
                if(total <= k) return true;
            }
        }
        return false;
    };

    // 4) Binary search for the maximum X.
    //    Safe high bound: maxSize * k (if X > maxSize*k, even one item costs > k).
    long long low = 0;
    long long high = maxSize * k + 1; // +1 to ensure we do not miss an edge
    long long ans = 0;
    while(low <= high){
        long long mid = (low + high) / 2;
        if(canBuild(mid)){
            ans = mid;      // mid is feasible, try bigger
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }

    // 5) Print the result
    cout << ans << "\n";
    return 0;
}
```
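On the sample above, the program reads n and m, the n x m size matrix, then k (an assumed plain-whitespace format), and prints the answer:

```text
Input:
4 3
3 2 5
1 3 2
2 2 2
1 1 1
11

Output:
12
```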
ceilDiv(x, y): a safe integer way to compute ceil(x/y) as (x + y - 1) / y for positive integers, without floating-point issues.
Binary Search:
The search range is 0 to maxSize * k + 1. Each step picks mid = (low+high)/2, then calls canBuild(mid).
canBuild(X):
First compute costOne[i]: the cost if we used only shop i, i.e. the sum over all types j of ceil(X / shops[i][j]). If min(costOne[i]) <= k, we're done.
Otherwise, check whether some pair of shops i1, i2 together can achieve cost <= k by taking the cheaper shop per type. We do a nested loop over (i1, i2), plus an inner loop over j to sum the minima. If any pair is within budget, canBuild(X) is true.
Complexity:
Each call to canBuild(X) is O(n*m + n^2 * m), and the binary search multiplies that by O(log(maxSize * k)). For moderate n, m (e.g. up to 1000) it can pass in optimized C++.