Solve in C++: Problem D. Krasnaya Polyana Routes. Name ...

Created: January 8, 2026

Answered using GPT-5.2 Thinking by Chat01

Question

Solve in C++: Problem D. Krasnaya Polyana Routes
Input file name: stdin
Output file name: stdout
Time limit: 0.5 seconds
Memory limit: 512 megabytes
At the well-known Krasnaya Polyana resort there are N spots attractive to vacationing tourists, with N trails of equal length laid between them. Every trail can be traveled in either direction. The trails are arranged so that any spot can be reached from any other, though sometimes this takes too many moves. Before laying new trails, the resort administration wants to find out how many distinct paths exist between all pairs of recreation spots located farthest from each other.
By "spots located farthest from each other" we mean pairs for which the shortest distance between them is maximal. The required answer is the total number of shortest paths between all such pairs of spots.
Input format
The first line contains the number N (3 ⩽ N ⩽ 200 000). Each of the next N lines contains 2 numbers: the recreation spots connected by the corresponding trail. It is guaranteed that all trails connect distinct pairs of spots.
Output format
Print a single number: the number of shortest paths between all pairs of recreation spots located farthest from each other.
Examples

stdin:
6
1 2
1 3
2 4
4 3
4 5
4 6

stdout:
4
1 2
1 3
1 4

Thinking

Finalizing C++ algorithm with fast input handling

I’m finalizing the C++ implementation. I’ll use custom input handling for faster reading, replacing cin with getchar_unlocked. I’ll read the nodes and edges, allocate memory efficiently, and build the cycle detection logic carefully, using static buffers and dynamic arrays to handle cycle nodes and their neighbors.

Planning tree and cycle handling in C++

I'm setting up my arrays to handle both the cycle nodes and tree traversal. Once the cycle nodes are identified, I’ll store their neighbors and track traversal order. I'll use a stack for depth-first search (DFS) and update the depth, count, and diameter information in each component. Lastly, calculating the cross-diameter will give the final result.

Handling large values with care

I’m using unsigned long long for large values, ensuring no overflow when calculating counts and sums. Even though multiplications fit within ull, I’ll cast to __int128 for safety just in case. I’m also considering memory efficiency, using std::vector to avoid excessive allocations while maintaining pointers.

Handling memory and efficiency concerns

I’m wondering about the memory overhead with managing many indices, especially with deques. Using vector<int> and a single deque per block might help, but it might still be inefficient with frequent allocations. Another approach involves maintaining a frequency map and a monotonic deque. I'll optimize the sum of counts for maximal keys efficiently.

Managing monotonic deque and sums

I need to maintain a monotonic deque of indices, tracking the maximum key and its associated sum of counts. When a new index with a larger key is added, smaller ones are popped. The maximal key always stays at the front. To manage sums efficiently, I’ll store groups of indices and track their expiration based on boundaries.

Optimizing sum computation in a monotonic deque

I’ll store the last index but avoid partial subtractions by caching the sum for the front key group. This way, I can avoid repetitive recalculations. By maintaining a cached sum and updating it when indices leave the window, I avoid scanning again for the same front group. This makes things efficient!

Managing front sum with monotonic deque

If the key is equal to the front key (max key), then any indices behind it will have the same or smaller key. The indices with the max key are contiguous at the front. When a new key is added, if it’s equal to the front key, the front group expands, and I update the sum by adding counts accordingly.

Simplifying the deque approach

When the frontSum becomes zero and the frontKey needs to change, I'll recompute the frontSum by scanning the indices of the new frontKey. This won't require inner deques—just maintaining the monotonic deque and recalculating the sum when necessary, making the algorithm more efficient by amortizing the work to O(N).

Refining front group management

When all keys are equal to the frontKey, the back index will also match and will be popped. This update requires adjusting the frontSum and managing counts more carefully. However, with caching, I can efficiently track changes to the frontSum, especially when new indices are pushed with the same key.

Managing front group and outdated indices

When removing outdated indices, I need to be careful with the frontSum and frontKey updates. After popping all elements smaller than k, the front sum is reset, and I’ll need to recalculate the frontSum by scanning until the key changes. The scanning is efficient, with amortized O(N) complexity.

Thought for 10m 44s

Answer

cpp
#include <bits/stdc++.h>
using namespace std;

struct FastScanner {
    static constexpr int BUFSIZE = 1 << 20;
    int idx = 0, size = 0;
    char buf[BUFSIZE];
    inline char readChar() {
        if (idx >= size) {
            size = (int)fread(buf, 1, BUFSIZE, stdin);
            idx = 0;
            if (size == 0) return 0;
        }
        return buf[idx++];
    }
    template <class T>
    bool readInt(T &out) {
        char c;
        do { c = readChar(); if (!c) return false; } while (c <= ' ');
        bool neg = false;
        if (c == '-') { neg = true; c = readChar(); }
        T val = 0;
        while (c > ' ') { val = val * 10 + (c - '0'); c = readChar(); }
        out = neg ? -val : val;
        return true;
    }
};

int main() {
    ios::sync_with_stdio(false);
    cin.tie(nullptr);
    FastScanner fs;
    int N;
    if (!fs.readInt(N)) return 0;
    vector<int> U(N), V(N);
    vector<int> deg(N + 1, 0);
    for (int i = 0; i < N; i++) {
        int a, b;
        fs.readInt(a);
        fs.readInt(b);
        U[i] = a; V[i] = b;
        deg[a]++; deg[b]++;
    }
    vector<vector<int>> g(N + 1);
    for (int i = 1; i <= N; i++) g[i].reserve(deg[i]);
    for (int i = 0; i < N; i++) {
        int a = U[i], b = V[i];
        g[a].push_back(b);
        g[b].push_back(a);
    }

    // 1) Find cycle by leaf-pruning (graph is connected, edges = nodes => unicyclic)
    vector<int> d = deg;
    deque<int> q;
    vector<char> removed(N + 1, 0);
    for (int i = 1; i <= N; i++) if (d[i] == 1) q.push_back(i);
    while (!q.empty()) {
        int v = q.front(); q.pop_front();
        removed[v] = 1;
        for (int u : g[v]) {
            if (!removed[u]) {
                if (--d[u] == 1) q.push_back(u);
            }
        }
    }
    vector<char> isCycle(N + 1, 0);
    vector<int> cycleNodes;
    cycleNodes.reserve(N);
    for (int i = 1; i <= N; i++) {
        if (!removed[i]) { isCycle[i] = 1; cycleNodes.push_back(i); }
    }

    // Build cycle order
    int start = cycleNodes[0];
    vector<int> cycA(N + 1, 0), cycB(N + 1, 0);
    for (int v : cycleNodes) {
        for (int u : g[v]) if (isCycle[u]) {
            if (!cycA[v]) cycA[v] = u; else cycB[v] = u;
        }
    }
    vector<int> cycOrder;
    cycOrder.reserve(cycleNodes.size());
    int prev = 0, cur = start;
    while (true) {
        cycOrder.push_back(cur);
        int nxt = (cycA[cur] == prev ? cycB[cur] : cycA[cur]);
        prev = cur; cur = nxt;
        if (cur == start) break;
    }
    int m = (int)cycOrder.size();
    int K = m / 2;

    // 2) Root each attached tree at its cycle vertex, compute:
    //    height[root] = max depth from root in its tree (excluding cycle edges),
    //    cntHeight[root] = number of nodes achieving that height,
    //    compDiamLen[root] and compDiamPairs[root] for tree-only diameters.
    using ull = unsigned long long;
    vector<int> parent(N + 1, -1), rootOf(N + 1, -1);
    vector<int> depth(N + 1, 0);
    vector<ull> cnt(N + 1, 1);
    vector<int> compDiamLen(N + 1, 0);
    vector<ull> compDiamPairs(N + 1, 0);
    vector<int> st;
    st.reserve(N);
    for (int c : cycleNodes) {
        rootOf[c] = c;
        for (int u : g[c]) if (!isCycle[u]) {
            parent[u] = c; rootOf[u] = c; st.push_back(u);
        }
    }
    vector<int> trav;
    trav.reserve(N);
    while (!st.empty()) {
        int v = st.back(); st.pop_back();
        trav.push_back(v);
        for (int u : g[v]) {
            if (u == parent[v] || isCycle[u]) continue;
            parent[u] = v; rootOf[u] = rootOf[v]; st.push_back(u);
        }
    }
    auto process_node = [&](int v, int compRoot) {
        int bestDepth = 0;
        ull bestCnt = 1;
        int maxd = -1, secondd = -1;
        ull sum1 = 0, sum2 = 0;
        ull tiePairs = 0;
        int sources1 = 0;
        auto consider = [&](int dd, ull cc) {
            if (dd > maxd) {
                secondd = maxd; sum2 = sum1;
                maxd = dd; sum1 = cc;
                tiePairs = 0; sources1 = 1;
            } else if (dd == maxd) {
                tiePairs += (ull)((__int128)sum1 * cc);
                sum1 += cc; sources1++;
            } else if (dd > secondd) {
                secondd = dd; sum2 = cc;
            } else if (dd == secondd) {
                sum2 += cc;
            }
        };
        consider(0, 1); // self as an endpoint (depth 0, count 1)
        for (int u : g[v]) {
            if (isCycle[u]) continue;
            if (parent[u] != v) continue; // children only
            int dd = depth[u] + 1;
            ull cc = cnt[u];
            if (dd > bestDepth) { bestDepth = dd; bestCnt = cc; }
            else if (dd == bestDepth) { bestCnt += cc; }
            consider(dd, cc);
        }
        depth[v] = bestDepth;
        cnt[v] = (bestDepth == 0 ? 1ULL : bestCnt);
        int pathLen = 0;
        ull pairs = 0;
        if (sources1 >= 2) {
            pathLen = 2 * maxd;
            pairs = tiePairs;
        } else if (secondd != -1) {
            pathLen = maxd + secondd;
            pairs = (ull)((__int128)sum1 * sum2);
        }
        if (pairs) {
            if (pathLen > compDiamLen[compRoot]) {
                compDiamLen[compRoot] = pathLen;
                compDiamPairs[compRoot] = pairs;
            } else if (pathLen == compDiamLen[compRoot]) {
                compDiamPairs[compRoot] += pairs;
            }
        }
    };
    for (int i = (int)trav.size() - 1; i >= 0; --i) {
        int v = trav[i];
        process_node(v, rootOf[v]);
    }
    for (int c : cycleNodes) { process_node(c, c); }
    int maxTreeDiam = 0;
    for (int c : cycleNodes) maxTreeDiam = max(maxTreeDiam, compDiamLen[c]);

    // 3) Diameter through the cycle (using heights), and count of shortest paths for diametral pairs.
    vector<int> h(m);
    vector<ull> ch(m);
    for (int i = 0; i < m; i++) {
        int node = cycOrder[i];
        h[i] = depth[node];
        ch[i] = cnt[node];
    }
    vector<int> h2(2 * m);
    vector<ull> ch2(2 * m);
    vector<long long> keyArr(2 * m);
    for (int i = 0; i < 2 * m; i++) {
        h2[i] = h[i % m];
        ch2[i] = ch[i % m];
        keyArr[i] = (long long)h2[i] + i; // (h[j] + j)
    }
    deque<int> dq; // indices, with non-increasing keyArr
    long long frontKey = LLONG_MIN;
    ull frontSum = 0;
    auto push_idx = [&](int idx) {
        long long k = keyArr[idx];
        ull c = ch2[idx];
        while (!dq.empty() && keyArr[dq.back()] < k) {
            if (keyArr[dq.back()] == frontKey) frontSum -= ch2[dq.back()];
            dq.pop_back();
        }
        if (dq.empty()) {
            dq.push_back(idx);
            frontKey = k;
            frontSum = c;
        } else {
            dq.push_back(idx);
            if (k == frontKey) frontSum += c;
        }
    };
    auto pop_old = [&](int leftBound) {
        while (!dq.empty() && dq.front() < leftBound) {
            if (keyArr[dq.front()] == frontKey) frontSum -= ch2[dq.front()];
            dq.pop_front();
        }
        if (dq.empty()) { frontKey = LLONG_MIN; frontSum = 0; return; }
        if (frontSum == 0) {
            frontKey = keyArr[dq.front()];
            frontSum = 0;
            for (int idx : dq) {
                if (keyArr[idx] != frontKey) break;
                frontSum += ch2[idx];
            }
        }
    };
    long long crossMax = LLONG_MIN;
    ull crossCount = 0;
    // init window for i=0: j in [1..K]
    for (int j = 1; j <= K; j++) push_idx(j);
    int r = K + 1;
    for (int i = 0; i < m; i++) {
        int left = i + 1;
        int right = i + K;
        while (r <= right) push_idx(r++);
        pop_old(left);
        if (dq.empty()) continue;
        long long val = (long long)h2[i] + frontKey - i; // (h[i]-i) + (h[j]+j)
        ull ways = (ull)((__int128)ch2[i] * frontSum); // directed counting (opposite pairs counted twice -> 2 shortest paths)
        if (val > crossMax) { crossMax = val; crossCount = ways; }
        else if (val == crossMax) { crossCount += ways; }
    }
    long long D = max<long long>(maxTreeDiam, crossMax);
    ull ans = 0;
    for (int c : cycleNodes) {
        if ((long long)compDiamLen[c] == D) ans += compDiamPairs[c];
    }
    if (crossMax == D) ans += crossCount;
    cout << ans << "\n";
    return 0;
}
