Solution Template Summary
Created on: February 2, 2025
Answered using O3 Mini High from Chat01
Help me summarize some problems and give me a solution template (SOP) so I can solve this kind of problem quickly. Below is an introduction to the problem format plus example problems. Identify the pattern and give me a general SOP template that lets me finish efficiently within 90 minutes.
What needs to be done
1 problem in 4 levels in 90 mins
4 Levels
Level 1 is normally simple, considered Leetcode easy
Level 2 will be based on Level 1 and harder, considered Leetcode medium
Level 3 will be based on Level 2 and harder, considered Leetcode medium
Level 4 will be based on Level 3 and harder, considered Leetcode Hard
Judge
The total score of the problem should be 1000, with 250 for each level
Reference: how the scoring works
<scoring>
We are excited to release a new coding score system across all CodeSignal Skills Evaluation Frameworks! You may be requested to complete any of these evaluations on behalf of various companies, and with this new coding score system, you can look forward to companies being able to better gauge your core skills and level of expertise.
If you have completed a skills evaluation in the past, CodeSignal will automatically convert your historical coding scores to Assessment Score. Thus, your existing coding reports will remain shareable with companies that request them in the future!
Read on to learn how to interpret the results of Assessment Score, and how it is calculated.
About Assessment Score
The new range for our coding score was recalibrated by our team of IO Psychologists after reviewing data from tens of thousands of assessment completions. Previously, the General Coding Framework used a 600-850 score range, and other Frameworks produced a raw score out of 1,000. Now, when you complete a skills evaluation that is framework-based, your coding score will range from 200-600, and your coding report will give visibility into your mastery of the individual skills being evaluated.
Overall Coding Score: Each Skills Evaluation Framework you complete will result in a single Coding Score that quantifies your core skills. This number will range from 200-600, with higher scores indicating that you successfully completed more questions in the assessment.
Skill area proficiency: Assessment Score also gives you insight into your level of proficiency across the various skills that each Skills Evaluation Framework measures. This data is meant to inform you of areas where you may need more development.
The score interpretation is straightforward: higher scores indicate that candidates successfully completed more of the assessment, thus demonstrating more of the relevant skills being evaluated. Under that lens, candidates should not spend all of their time on a single question if possible; the more questions a candidate successfully completes within the time frame, the higher their score will be. Although speed is not an explicit element of the scoring, it is implicitly part of it, in that candidates should successfully complete as many questions as they can.
How Assessment Score is Calculated
The Assessment Score includes a two-tiered scoring system contributing to the overall score and a detailed breakdown of skill proficiencies based on performance. Questions within a Skills Evaluation Framework are organized and grouped into modules. Within the two-tiered system used for calculating the overall score, the first tier (i.e., the base points) treats all modules or question groups equally; the second tier (i.e., the bonus points) is awarded upon full completion of a module.
Assessment Score uses a two-tiered scoring system for each module (a set of 1 or more questions). Note: Bonuses are derived from normative difficulty and discrimination parameters for all questions within the module.
First tier: The first tier refers to the base points you can earn by successfully completing questions within a module of the Skills Evaluation Framework. As you successfully complete more questions within a module, you can earn up to the module’s base point value.
Second tier: The second tier refers to the bonus points you may be awarded based on performance. You can only earn bonus points for a module if you successfully solve 100% of the questions within the module. More bonus points are awarded for completing modules that are higher in difficulty.
Final Score: Your final score ranges from 200 to 600.
Please note that if you completed an evaluation on CodeSignal in the past, your coding report will reflect the new Assessment Score format going forward. In these cases, the correctness of your solutions has not changed; rather, the original correctness of your code is simply recalculated in a new and improved format.
Skill Proficiency
CodeSignal also offers qualitative feedback on the individual skills each Skills Evaluation Framework measures, separate from the overall Assessment Score. The feedback system was designed to: 1) provide companies and developers with greater insights into opportunities for your growth and development once hired, and 2) improve your experience by providing individualized feedback on specific skill areas where you currently excel or have the potential to develop your skills further. The feedback system includes qualitative descriptions of skill proficiency levels ranging from “Developing” to “Expert,” and a visualization of candidate skill proficiency levels for the skill areas measured in each Skills Evaluation Framework. For example, the skill areas of the General Coding Framework are Basic Coding, Data Manipulation, Ease of Implementation, and Problem-Solving.
Skill proficiency levels for each Skills Evaluation Framework are determined based on input from subject matter experts. Given that a specific skill may be relevant to multiple modules, candidates have increased opportunities to demonstrate skill proficiency by successfully completing more than one module. The level assigned depends on how successfully the candidate completes the modules linked to each skill area. The four possible skill proficiency levels are:
Developing: the candidate displays minimal skill proficiency
Intermediate: the candidate displays an average degree of skill proficiency
Advanced: the candidate displays an above-average degree of skill proficiency
Expert: the candidate displays the highest level of skill proficiency
Overall, this new system is designed to maintain scoring consistency across Skills Evaluation Frameworks, increase the precision of measuring candidate skills, and improve fairness of evaluation results. We look forward to your next evaluation!
Questions about this new scoring system? Please contact [email protected] for assistance.
</scoring> <examples>
Bank Account Management System 2.0
=================
Implement an advanced bank account management system that simulates more complex banking operations. Total time limit: 90 minutes.
Description:
Implement basic bank account operations, including account creation, deposits/withdrawals, and balance queries.
Operations:
CREATE_ACCOUNT <account_id> <initial_balance>
DEPOSIT <account_id> <amount>
WITHDRAW <account_id> <amount>
CHECK_BALANCE <account_id>
Description:
Implement a cashback feature where different transaction types earn different cashback rates.
Operations:
SET_CASHBACK_RATE <transaction_type> <rate>
MAKE_PURCHASE <account_id> <amount> <type>
GET_CASHBACK_SUMMARY <account_id>
Description:
Implement a smart payment routing system that automatically selects the optimal payment method.
Operations:
ADD_PAYMENT_METHOD <account_id> <method_id> <type>
SMART_PAY <account_id> <amount> <category>
GET_PAYMENT_STATS <account_id>
Description:
Implement multi-account management and in-depth analytics.
Operations:
LINK_ACCOUNTS <primary_id> <secondary_id>
GET_SPENDING_PATTERN <account_id> <timeframe>
OPTIMIZE_SPENDING <account_id>
Example:
```python
queries = [
    ["CREATE_ACCOUNT", "acc1", "1000"],
    ["SET_CASHBACK_RATE", "SHOPPING", "0.02"],
    ["MAKE_PURCHASE", "acc1", "500", "SHOPPING"],
    ["GET_CASHBACK_SUMMARY", "acc1"],
    ["ADD_PAYMENT_METHOD", "acc1", "card1", "CREDIT_CARD"],
    ["SMART_PAY", "acc1", "300", "DINING"],
    ["GET_SPENDING_PATTERN", "acc1", "MONTHLY"]
]
# Expected output:
# ["CREATED", "RATE_SET", "SUCCESS(10.0)", "SHOPPING(10.0)",
#  "METHOD_ADDED", "PAID_VIA_card1", "DINING(60%), SHOPPING(40%)"]
```

Bank Transaction System
===========

Instructions
----
Your task is to implement a banking transaction system that handles accounts, transactions, and analytics. The system should be implemented level by level, with each level adding more complexity and functionality.

Requirements
----
Design your implementation according to the level specifications below:
- Level 1: Basic transaction operations and account management
- Level 2: Transaction analytics and reporting
- Level 3: Scheduled transaction management
- Level 4: Account merging with history preservation

Note: the system will receive a series of queries, and the final output should be an array of strings representing the return values of all queries. Each query will only call one operation.

Level 1: Basic Operations
-------------
The system should support basic account operations and transactions.

Operations:
1. CREATE_ACCOUNT <account_id> <initial_balance>
- Creates a new bank account with the given initial balance
- The initial balance must be non-negative
- Returns "ACCOUNT_CREATED" on success
- Returns "ACCOUNT_ALREADY_EXISTS" if the account already exists
2. DEPOSIT <account_id> <amount>
- Deposits the amount into the specified account
- The amount must be positive
- Returns "DEPOSIT_SUCCESS" on success
- Returns "ACCOUNT_NOT_FOUND" if the account does not exist
3. TRANSFER <source_account> <target_account> <amount>
- Transfers money between accounts
- The amount must be positive
- Returns "TRANSFER_SUCCESS" on success
- Returns "ACCOUNT_NOT_FOUND" if either account does not exist
- Returns "INSUFFICIENT_BALANCE" if the source account's balance is insufficient
4. GET_BALANCE <account_id>
- Returns a string representation of the current balance
- Returns "ACCOUNT_NOT_FOUND" if the account does not exist

Level 2: Analytics
-------------
Implement transaction analytics.

Operations:
1. GET_TOP_SPENDERS <k>
- Returns the top k accounts by total spending
- Return format: "account1(amount1), account2(amount2), ..."
- Sorted by spending in descending order; ties broken by account ID in lexicographical order
- If k exceeds the total number of accounts, return all accounts
2. GET_TRANSACTION_HISTORY <account_id>
- Returns all transaction records for the account
- Return format: "type1(amount1), type2(amount2), ..."
- Types: DEPOSIT, TRANSFER_IN, TRANSFER_OUT
- Sorted in chronological order
- Returns "ACCOUNT_NOT_FOUND" if the account does not exist

Level 3: Scheduled Transactions
-------------
Implement support for transactions on future dates.

Operations:
1. SCHEDULE_TRANSFER <transfer_id> <source_account> <target_account> <amount> <timestamp>
- Schedules a transfer for the future
- Returns "TRANSFER_SCHEDULED" on success
- Returns "TRANSFER_ID_EXISTS" if the transfer ID already exists
- Returns "ACCOUNT_NOT_FOUND" if either account does not exist
2. CANCEL_TRANSFER <transfer_id>
- Cancels a scheduled transfer
- Returns "TRANSFER_CANCELLED" on success
- Returns "TRANSFER_NOT_FOUND" if the transfer does not exist
- Returns "TRANSFER_ALREADY_EXECUTED" if the transfer has already been executed

Level 4: Account Merging
-------------
Implement account merging while preserving transaction history.

Operations:
1. MERGE_ACCOUNTS <account_id1> <account_id2> <new_account_id>
- Merges two accounts into a new account
- Returns "ACCOUNTS_MERGED" on success
- Returns "ACCOUNT_NOT_FOUND" if either source account does not exist
- Returns "ACCOUNT_ALREADY_EXISTS" if the new account ID already exists
- Merges the balances and preserves the transaction history
2. GET_MERGED_HISTORY <account_id>
- Returns the complete transaction history, including the source accounts
- Same format as GET_TRANSACTION_HISTORY
- Returns "ACCOUNT_NOT_FOUND" if the account does not exist
- Returns the regular transaction history if the account is not a merged account

Example
----
```python
queries = [
    ["CREATE_ACCOUNT", "acc1", "1000"],
    ["CREATE_ACCOUNT", "acc2", "500"],
    ["TRANSFER", "acc1", "acc2", "300"],
    ["GET_BALANCE", "acc1"],
    ["GET_BALANCE", "acc2"],
    ["GET_TOP_SPENDERS", "2"],
    ["SCHEDULE_TRANSFER", "t1", "acc2", "acc1", "100", "2024-01-01"],
    ["MERGE_ACCOUNTS", "acc1", "acc2", "merged_acc"]
]
# Expected output:
# ["ACCOUNT_CREATED", "ACCOUNT_CREATED", "TRANSFER_SUCCESS", "700", "800",
#  "acc1(300), acc2(0)", "TRANSFER_SCHEDULED", "ACCOUNTS_MERGED"]
```

Anthropic Online Assessment -- Cloud Storage System

Instructions
Your task is to implement a simple cloud storage system. All operations that should be supported are listed below. Solving this task consists of several levels. Subsequent levels are opened when the current level is correctly solved. You always have access to the data for the current and all previous levels.

Requirements
Your task is to implement a simple cloud storage system that maps objects (files) to their metainformation. Specifically, the storage system should maintain files along with some information about them (name, size, etc.). Note that this system should be in-memory; you do not need to work with the real filesystem.

Plan your design according to the level specifications below:
• Level 1: The cloud storage system should support adding new files, retrieving, and copying files.
• Level 2: The cloud storage system should support finding files by matching prefixes and suffixes.
• Level 3: The cloud storage system should support adding users with various capacity limits.
• Level 4: The cloud storage system should support compressing and decompressing files.

To move to the next level, you need to pass all the tests at this level.
Note
You will receive a list of queries to the system, and the final output should be an array of strings representing the returned values of all queries. Each query will only call one operation. It is guaranteed that the given queries will never call operations that result in collisions between file and directory names.

Level 1
The cloud storage system should support operations to add files, copy files, and get files stored on the system.

ADD_FILE <name> <size>
Should add a new file name to the storage. size is the amount of memory required in bytes. The current operation fails if a file with the same name already exists. Returns "true" if the file was added successfully or "false" otherwise.

COPY_FILE <nameFrom> <nameTo>
Should copy the file at nameFrom to nameTo. The operation fails if nameFrom points to a file that does not exist or points to a directory. The operation fails if the specified file already exists at nameTo. Returns "true" if the file was copied successfully or "false" otherwise.

GET_FILE_SIZE <name>
Should return a string representing the size of the file name if it exists, or an empty string otherwise.
Examples
The example below shows how these operations should work:

```python
queries = [
    ["ADD_FILE", "/dir1/dir2/file.txt", "10"],
    ["COPY_FILE", "/not-existing.file", "/dir1/file.txt"],
    ["COPY_FILE", "/dir1/dir2/file.txt", "/dir1/file.txt"],
    ["ADD_FILE", "/dir1/file.txt", "15"],
    ["COPY_FILE", "/dir1/file.txt", "/dir1/dir2/file.txt"],
    ["GET_FILE_SIZE", "/dir1/file.txt"],
    ["GET_FILE_SIZE", "/not-existing.file"]
]
```

Explanations:
- returns `"true"`; adds file `"/dir1/dir2/file.txt"` of 10 bytes
- returns `"false"`; the file `"/not-existing.file"` does not exist
- returns `"true"`; adds file `"/dir1/file.txt"` of 10 bytes
- returns `"false"`; the file `"/dir1/file.txt"` exists already
- returns `"false"`; the file `"/dir1/dir2/file.txt"` exists already
- returns `"10"`
- returns `""`; the file `"/not-existing.file"` does not exist

The output should be: ["true", "false", "true", "false", "false", "10", ""]

Level 2
Implement support for retrieving file names by searching directories via prefixes and suffixes.

FIND_FILE <prefix> <suffix>
Should search for files with names starting with prefix and ending with suffix. Returns a string representing all matching files in this format: "<name1>(<size1>), <name2>(<size2>), ...". The output should be sorted in descending order of file sizes or, in the case of ties, lexicographically. If no files match the required properties, should return an empty string.
Examples
The example below shows how these operations should work:

```python
queries = [
    ["ADD_FILE", "/root/dir/another_dir/file.mp3", "10"],
    ["ADD_FILE", "/root/file.mp3", "5"],
    ["ADD_FILE", "/root/music/file.mp3", "7"],
    ["COPY_FILE", "/root/music/file.mp3", "/root/dir/file.mp3"],
    ["FIND_FILE", "/root", ".mp3"],
    ["FIND_FILE", "/root", "file.txt"],
    ["FIND_FILE", "/dir", "file.mp3"]
]
```

Explanations:
- returns `"true"`
- returns `"true"`
- returns `"true"`
- returns `"true"`
- returns `"/root/dir/another_dir/file.mp3(10), /root/dir/file.mp3(7), /root/music/file.mp3(7), /root/file.mp3(5)"`
- returns `""`; there is no file with the prefix `"/root"` and suffix `"file.txt"`
- returns `""`; there is no file with the prefix `"/dir"` and suffix `"file.mp3"`

The output should be: ["true", "true", "true", "true", "/root/dir/another_dir/file.mp3(10), /root/dir/file.mp3(7), /root/music/file.mp3(7), /root/file.mp3(5)", "", ""]

Level 3
Implement support for different users sending queries to the system. All users share a common filesystem in the cloud storage, but each user is assigned an individual storage capacity limit.

ADD_USER <userId> <capacity>
Should add a new user to the system, with capacity as their storage limit in bytes. The total size of all files owned by userId cannot exceed capacity. The operation fails if a user with userId already exists. Returns "true" if a user with userId is successfully created, or "false" otherwise.

ADD_FILE_BY <userId> <name> <size>
Should behave in the same way as ADD_FILE from Level 1, but the added file should be owned by the user with userId. A new file cannot be added to the storage if doing so will exceed the user’s capacity limit. Returns a string representing the remaining storage capacity for userId if the file is successfully added, or an empty string otherwise.
Note
All queries calling the ADD_FILE operation implemented during Level 1 are run by the user with userId = "admin", who has unlimited storage capacity. Also, assume that the COPY_FILE operation preserves the ownership of the original file.

UPDATE_CAPACITY <userId> <capacity>
Should change the maximum storage capacity for the user with userId. If the total size of all the user’s files exceeds the new capacity, the largest files (sorted lexicographically in case of a tie) should be removed from the storage until the total size of all remaining files no longer exceeds the new capacity. Returns a string representing the number of removed files, or an empty string if a user with userId does not exist.

Examples
The example below shows how these operations should work:

```python
queries = [
    ["ADD_USER", "user1", "125"],
    ["ADD_USER", "user1", "100"],
    ["ADD_USER", "user2", "100"],
    ["ADD_FILE_BY", "user1", "/file.med", "30"],
    ["ADD_FILE_BY", "user2", "/file.med", "40"],
    ["COPY_FILE", "/file.med", "/dir/another/file.med"],
    ["COPY_FILE", "/file.med", "/file.small", "10"],
    ["ADD_FILE_BY", "admin", "/dir/file_small", "5"],
    ["ADD_FILE_BY", "user1", "/my_folder/file.huge", "100"],
    ["ADD_FILE_BY", "user3", "/my_folder/file.huge", "100"],
    ["UPDATE_CAPACITY", "user1", "300"],
    ["UPDATE_CAPACITY", "user1", "50"],
    ["UPDATE_CAPACITY", "user2", "1000"]
]
```

Explanations:
1. returns `"true"`; creates user `"user1"` with 125 bytes capacity
2. returns `"false"`; `"user1"` already exists
3. returns `"true"`; creates user `"user2"` with 100 bytes capacity
4. returns `"95"`
5. returns `""`; a file named `"/file.med"` already exists and is owned by `"user1"`
6. returns `"true"`; copying preserves the file owner. After copying, `"user1"` has 15 capacity left
7. returns `"false"`; `"user1"` does not have enough storage capacity left to perform the copying operation
8. returns `"true"`; this operation is done by `"admin"` with unlimited capacity
9. returns `"false"`; `"user1"` does not have enough storage capacity left to add this file
10. returns `""`; `"user3"` doesn't exist
11. returns `"0"`; all files owned by `"user1"` can fit into the new capacity of 300 bytes
12. returns `"2"`; the files `"/dir/file.big"` and `"/dir/another/file.med"` should be deleted so the remaining files owned by `"user1"` can fit into the new capacity of 50 bytes
13. returns `""`; `"user2"` doesn't exist

The output should be: ["true", "false", "true", "95", "", "true", "false", "true", "false", "", "0", "2", ""]

Level 4
Implement support for file compression.

COMPRESS_FILE <userId> <name>
Should compress the file name if it belongs to userId. The compressed file should be replaced with a new file named <name>.COMPRESSED. The size of the newly created file should be equal to half of the original file. The size of all files is guaranteed to be even, so there should be no fractional calculations. It is also guaranteed that name for this operation never points to a compressed file (i.e., it never ends with .COMPRESSED). Compressed files should be owned by userId, the owner of the original file. Returns a string representing the remaining storage capacity for userId if the file was compressed successfully, or an empty string otherwise.

Note
Because file names can only contain lowercase letters, compressed files cannot be added via ADD_FILE. It is guaranteed that all COPY_FILE operations will preserve the suffix .COMPRESSED.

DECOMPRESS_FILE <userId> <name>
Should revert the compression of the file name if it belongs to userId. It is guaranteed that name for this operation always ends with .COMPRESSED. If decompression results in userId exceeding their storage capacity limit, or a decompressed version of the file with the given name already exists, the operation fails. Returns a string representing the remaining capacity of userId if the file was decompressed successfully, or an empty string otherwise.
Examples
The example below shows how these operations should work:

```python
queries = [
    ["ADD_USER", "user1", "1000"],
    ["ADD_USER", "user2", "5000"],
    ["ADD_FILE_BY", "user1", "/dir/file.mp4", "500"],
    ["ADD_FILE_BY", "user2", "/dir/file.mp4", "1"],
    ["COMPRESS_FILE", "user3", "/dir/file.mp4"],
    ["COMPRESS_FILE", "user1", "/folder/non_existing_file"],
    ["COMPRESS_FILE", "user1", "/dir/file.mp4"],
    ["COMPRESS_FILE", "user1", "/dir/file.mp4"],
    ["GET_FILE_SIZE", "/dir/file.mp4.COMPRESSED"],
    ["GET_FILE_SIZE", "/dir/file.mp4"],
    ["COPY_FILE", "/dir/file.mp4.COMPRESSED", "/file.mp4.COMPRESSED"],
    ["ADD_FILE_BY", "user1", "/dir/file.mp4", "500"],
    ["DECOMPRESS_FILE", "user1", "/dir/file.mp4.COMPRESSED"],
    ["UPDATE_CAPACITY", "user1", "2000"],
    ["DECOMPRESS_FILE", "user2", "/dir/file.mp4.COMPRESSED"],
    ["DECOMPRESS_FILE", "user3", "/dir/file.mp4.COMPRESSED"],
    ["DECOMPRESS_FILE", "user1", "/dir/file.mp4.COMPRESSED"]
]
```

Explanations:
1. returns `"true"`
2. returns `"true"`
3. returns `"500"`
4. returns `""`; the file `"/dir/file.mp4"` is owned by `"user1"`
5. returns `""`; `"user3"` doesn't exist
6. returns `""`; the file `"/folder/non_existing_file"` doesn't exist
7. returns `"750"`; the file `"/dir/file.mp4"` is compressed to size = 500 / 2 = 250 bytes
8. returns `""`
9. returns `"250"`
10. returns `""`
11. returns `"true"`
12. returns `"0"`; `"user1"` does not have enough storage capacity to decompress the file
13. returns `"0"`
14. returns `"true"`; the file `"/dir/file.mp4.COMPRESSED"` is owned by `"user1"`
15. returns `""`; `"user3"` doesn't exist
16. returns `""`; the file `"/dir/file.mp4"` exists already
17. returns `"750"`

The output should be: ["true", "true", "500", "", "", "", "750", "", "250", "", "true", "0", "0", "true", "", "", "750"]

Anthropic Online Assessment -- In Memory Database

Requirements
Your task is to implement a simplified version of an in-memory database.
Plan your design according to the level specifications below:
Level 1: The in-memory database should support basic operations to manipulate records, fields, and values within fields.
Level 2: The in-memory database should support displaying a specific record's fields based on a filter.
Level 3: The in-memory database should support TTL (Time-To-Live) configurations on database records.
Level 4: The in-memory database should support backup and restore functionality.

To move to the next level, you need to pass all the tests at this level.

Note
You will receive a list of queries to the system, and the final output should be an array of strings representing the returned values of all queries. Each query will only call one operation.

Level 1
The basic level of the in-memory database contains records. Each record can be accessed with a unique identifier key of string type. A record may contain several field-value pairs, both of which are of string type.

SET <key> <field> <value>
Should insert a field-value pair into the record associated with key. If the field in the record already exists, replace the existing value with the specified value. If the record does not exist, create a new one. This operation should return an empty string.

GET <key> <field>
Should return the value contained within field of the record associated with key. If the record or the field doesn't exist, should return an empty string.

DELETE <key> <field>
Should remove the field from the record associated with key. Returns "true" if the field was successfully deleted, and "false" if the key or the field do not exist in the database.
Examples
The example below shows how these operations should work:

```python
queries = [
    ["SET", "A", "B", "E"],
    ["SET", "A", "C", "F"],
    ["GET", "A", "B"],
    ["GET", "A", "D"],
    ["DELETE", "A", "B"],
    ["DELETE", "A", "D"]
]
```

Explanations:
- returns `""`; database state: {"A": {"B": "E"}}
- returns `""`; database state: {"A": {"B": "E", "C": "F"}}
- returns `"E"`
- returns `""`
- returns `"true"`; database state: {"A": {"C": "F"}}
- returns `"false"`; database state: {"A": {"C": "F"}}

Level 2
The database should support displaying data based on filters. Introduce an operation to support printing some fields of a record.

SCAN <key>
Should return a string representing the fields of a record associated with key. The returned string should be in the following format: "<field1>(<value1>), <field2>(<value2>), ...", where fields are sorted lexicographically. If the specified record does not exist, returns an empty string.

SCAN_BY_PREFIX <key> <prefix>
Should return a string representing some fields of a record associated with key. Specifically, only fields that start with prefix should be included. The returned string should be in the same format as in the SCAN operation, with fields sorted in lexicographical order.

Examples
The example below shows how these operations should work:

```python
queries = [
    ["SET", "A", "BC", "E"],
    ["SET", "A", "BD", "F"],
    ["SET", "A", "C", "G"],
    ["SCAN_BY_PREFIX", "A", "B"],
    ["SCAN", "A"],
    ["SCAN_BY_PREFIX", "B", "B"]
]
```

Explanations:
- returns `""`; database state: {"A": {"BC": "E"}}
- returns `""`; database state: {"A": {"BC": "E", "BD": "F"}}
- returns `""`; database state: {"A": {"BC": "E", "BD": "F", "C": "G"}}
- returns `"BC(E), BD(F)"`
- returns `"BC(E), BD(F), C(G)"`
- returns `""`

The output should be ["", "", "", "BC(E), BD(F)", "BC(E), BD(F), C(G)", ""].

Level 3
Support the timeline of operations and TTL (Time-To-Live) settings for records and fields. Each operation from previous levels now has an alternative version with a timestamp parameter to represent when the operation was executed.
For each field-value pair in the database, the TTL determines how long that value will persist before being removed.

Notes:
- Time should always flow forward, so timestamps are guaranteed to strictly increase as operations are executed.
- Each test cannot contain both versions of operations (with and without timestamp). However, you should maintain backward compatibility, so all previously defined methods should work in the same way as before.

SET_AT <key> <field> <value> <timestamp>
Should insert a field-value pair or update the value of the field in the record associated with key. This operation should return an empty string.

SET_AT_WITH_TTL <key> <field> <value> <timestamp> <ttl>
Should insert a field-value pair or update the value of the field in the record associated with key. Also sets its Time-To-Live starting at timestamp to be ttl. The ttl is the amount of time that this field-value pair should exist in the database, meaning it will be available during this interval: [timestamp, timestamp + ttl). This operation should return an empty string.

DELETE_AT <key> <field> <timestamp>
The same as DELETE, but with the timestamp of the operation specified. Should return "true" if the field existed and was successfully deleted, and "false" if the key didn't exist.

GET_AT <key> <field> <timestamp>
The same as GET, but with the timestamp of the operation specified.

SCAN_AT <key> <timestamp>
The same as SCAN, but with the timestamp of the operation specified.

SCAN_BY_PREFIX_AT <key> <prefix> <timestamp>
The same as SCAN_BY_PREFIX, but with the timestamp of the operation specified.
Examples
The examples below show how these operations should work.

Example 1:

```python
queries = [
    ["SET_AT_WITH_TTL", "A", "BC", "E", "1", "9"],
    ["SET_AT_WITH_TTL", "A", "BC", "E", "5", "10"],
    ["SET_AT", "A", "BD", "F", "5"],
    ["SCAN_BY_PREFIX_AT", "A", "B", "14"],
    ["SCAN_BY_PREFIX_AT", "A", "B", "15"]
]
```

Explanations:
- returns `""`; database state: {"A": {"BC": "E"}} where {"BC": "E"} expires at timestamp 10
- returns `""`; database state: {"A": {"BC": "E"}}; as field "BC" in record "A" already exists, it was overwritten, and {"BC": "E"} now expires at timestamp 15
- returns `""`; database state: {"A": {"BC": "E", "BD": "F"}} where {"BD": "F"} does not expire
- returns `"BC(E), BD(F)"`
- returns `"BD(F)"`

The output should be ["", "", "", "BC(E), BD(F)", "BD(F)"].

Example 2:

```python
queries = [
    ["SET_AT", "A", "B", "C", "1"],
    ["SET_AT_WITH_TTL", "X", "Y", "Z", "2", "15"],
    ["GET_AT", "X", "Y", "3"],
    ["SET_AT_WITH_TTL", "A", "D", "E", "4", "10"],
    ["SCAN_AT", "A", "13"],
    ["SCAN_AT", "X", "16"],
    ["SCAN_AT", "X", "17"],
    ["DELETE_AT", "X", "Y", "20"]
]
```

Explanations:
- returns `""`; database state: {"A": {"B": "C"}}
- returns `""`; database state: {"X": {"Y": "Z"}, "A": {"B": "C"}} where {"Y": "Z"} expires at timestamp 17
- returns `"Z"`
- returns `""`; database state: {"X": {"Y": "Z"}, "A": {"B": "C", "D": "E"}} where {"D": "E"} expires at timestamp 14 and {"Y": "Z"} expires at timestamp 17
- returns `"B(C), D(E)"`
- returns `"Y(Z)"`
- returns `""`; note that all fields in record "X" have expired
- returns `"false"`; the record "X" expired at timestamp 17 and can't be deleted

The output should be ["", "", "Z", "", "B(C), D(E)", "Y(Z)", "", "false"].

Level 4
The database should be backed up from time to time. Introduce operations to support backing up and restoring the database state based on timestamps. When restoring, ttl expiration times should be recalculated accordingly.

BACKUP <timestamp>
Should save the database state at the specified timestamp, including the remaining ttl for all records and fields.
Remaining ttl is the difference between the initial ttl and the current lifespan (the duration between the timestamp of this operation and the initial timestamp). Returns a string representing the number of non-empty non-expired records in the database.

RESTORE <timestamp> <timestampToRestore>
Should restore the database from the latest backup before or at timestampToRestore. It's guaranteed that a backup before or at timestampToRestore will exist. Expiration times for restored records and fields should be recalculated according to the timestamp of this operation: since the database timeline always flows forward, restored records and fields should expire after the timestamp of this operation, depending on their remaining ttls at backup. This operation should return an empty string.

Examples

```python
queries = [
    ["SET_AT_WITH_TTL", "A", "B", "C", "1", "10"],
    ["BACKUP", "3"],
    ["SET_AT", "A", "D", "E", "4"],
    ["BACKUP", "5"],
    ["DELETE_AT", "A", "B", "8"],
    ["BACKUP", "9"],
    ["RESTORE", "10", "7"],
    ["BACKUP", "11"],
    ["SCAN_AT", "A", "15"],
    ["SCAN_AT", "A", "16"]
]
```

Explanations:
- returns `""`; database state: {"A": {"B": "C"}} with lifespan [1, 11), meaning that the record should be deleted at timestamp = 11
- returns `"1"`; saves the database state
- returns `""`; database state: {"A": {"B": "C", "D": "E"}}
- returns `"1"`; saves the database state
- returns `"true"`; database state: {"A": {"D": "E"}}
- returns `"1"`; saves the database state
- returns `""`; restores the database to the state of the last backup at timestamp = 5: {"A": {"B": "C", "D": "E"}} with {"B": "C"} expiring at timestamp = 16. Since the initial ttl of the field is 10 and the database was restored to the state at timestamp = 5, this field has had a lifespan of 4 and a remaining ttl of 6, so it will now expire at timestamp = 10 + 6 = 16
- returns `"1"`; saves the database state
- returns `"B(C), D(E)"`
- returns `"D(E)"`

The output should be ["", "1", "", "1", "true", "1", "", "1", "B(C), D(E)", "D(E)"].
</examples>
Below is a general problem-solving SOP for this kind of leveled, modular coding problem, designed to get you through all four levels within 90 minutes. The core idea is: top-down requirements analysis, then data-structure design, then modular implementation, then incremental extension, testing as you write. The concrete steps are as follows:
1. Overall overview: before writing any code, skim all four levels and list every operation (command name, arguments, return value) so the design can accommodate later levels.
2. Determine the core data model: usually one central dictionary keyed by ID, e.g. `accounts = { account_id: {"balance": ..., "history": [...]} }`, plus auxiliary structures (indexes, scheduled items, backups) as the levels require.
3. Design module boundaries: one handler function per operation; handlers share the global data structures but stay independent of each other.
4. Main function structure: write a `solve(queries)` function whose input is the query list and whose output is the array of return values, one per query. Dispatch on the command name:

```python
if cmd == "CREATE_ACCOUNT":
    result = create_account(args)
elif cmd == "DEPOSIT":
    result = deposit(args)
# ... other commands
results.append(result)
```
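When the if/elif chain grows across four levels, a dispatch table keeps `solve` short. A minimal sketch using the bank-problem return strings (the two handlers shown are illustrative; a real solution needs one per operation):

```python
def solve(queries):
    db = {"accounts": {}}

    def create_account(account_id, balance):
        if account_id in db["accounts"]:
            return "ACCOUNT_ALREADY_EXISTS"
        db["accounts"][account_id] = {"balance": int(balance), "history": []}
        return "ACCOUNT_CREATED"

    def get_balance(account_id):
        acct = db["accounts"].get(account_id)
        return "ACCOUNT_NOT_FOUND" if acct is None else str(acct["balance"])

    # Command name -> handler; unknown commands fall back to "".
    handlers = {"CREATE_ACCOUNT": create_account, "GET_BALANCE": get_balance}
    return [handlers.get(q[0], lambda *_: "")(*q[1:]) for q in queries]
```

For example, `solve([["CREATE_ACCOUNT", "acc1", "1000"], ["GET_BALANCE", "acc1"]])` returns `["ACCOUNT_CREATED", "1000"]`. Registering handlers in a dict also makes adding a new level a one-line change per operation.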
5. Incremental implementation: finish Level 1 completely and pass its tests before starting Level 2; each level builds on the previous one, so refactor no more than the new level requires.
6. Input validation and error handling: check every failure case first (missing account, insufficient balance, duplicate ID) and return exactly the strings the problem specifies ("ACCOUNT_NOT_FOUND", "false", etc.).
7. Output formatting: put string formatting ("field1(value1), field2(value2)", "SUCCESS(10.0)") into helper functions, and follow the required sorting, parentheses, and comma separation exactly.
8. Time allocation: budget the 90 minutes across the four levels up front and move on once a level passes, rather than polishing one level.
9. Test-case verification: after each level, run the sample queries and compare against the expected output array before moving on.
10. Iterate and submit promptly: since each level is scored separately, submit working levels as soon as they pass and keep improving from there.
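For the verification step, a tiny comparison harness pays for itself: it pinpoints the first failing query instead of making you eyeball two long arrays (a sketch; `check` and its signature are mine):

```python
def check(solve, queries, expected):
    """Run solve on the sample queries and report the first mismatch."""
    actual = solve(queries)
    for i, (got, want) in enumerate(zip(actual, expected)):
        if got != want:
            print(f"query {i} {queries[i]}: got {got!r}, want {want!r}")
            return False
    return len(actual) == len(expected)
```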
The pseudocode template below can be referenced and adapted across problems of this type:
```python
def solve(queries):
    # Initialize global data structures (depending on the problem:
    # accounts, files, db_records, backup_snapshots, etc.)
    global_data = {
        "accounts": {},
        "files": {},
        "db": {},
        "backups": []
    }
    results = []

    # Main loop: iterate over all queries
    for query in queries:
        cmd = query[0]
        if cmd == "CREATE_ACCOUNT":
            # e.g. ["CREATE_ACCOUNT", account_id, initial_balance]
            result = handle_create_account(query[1:], global_data)
        elif cmd == "DEPOSIT":
            result = handle_deposit(query[1:], global_data)
        elif cmd == "WITHDRAW":
            result = handle_withdraw(query[1:], global_data)
        elif cmd == "CHECK_BALANCE":
            result = handle_check_balance(query[1:], global_data)
        # Level 2 operations, e.g. SET_CASHBACK_RATE, MAKE_PURCHASE, ...
        elif cmd == "SET_CASHBACK_RATE":
            result = handle_set_cashback_rate(query[1:], global_data)
        elif cmd == "MAKE_PURCHASE":
            result = handle_make_purchase(query[1:], global_data)
        # Level 3 operations, e.g. ADD_PAYMENT_METHOD, SMART_PAY, ...
        elif cmd == "ADD_PAYMENT_METHOD":
            result = handle_add_payment_method(query[1:], global_data)
        elif cmd == "SMART_PAY":
            result = handle_smart_pay(query[1:], global_data)
        # Level 4 operations, e.g. LINK_ACCOUNTS, GET_SPENDING_PATTERN, ...
        elif cmd == "LINK_ACCOUNTS":
            result = handle_link_accounts(query[1:], global_data)
        # ... other commands
        else:
            result = ""  # default return for unknown commands
        results.append(result)
    return results


# Example: handling the CREATE_ACCOUNT operation
def handle_create_account(args, data):
    account_id, initial_balance_str = args
    initial_balance = int(initial_balance_str)
    if account_id in data["accounts"]:
        return "ACCOUNT_EXISTS"  # or whatever the problem specifies
    # Store the basics on creation: balance, history, payment methods, ...
    data["accounts"][account_id] = {
        "balance": initial_balance,
        "history": [],
        # extend with more fields as needed
    }
    return "CREATED"


# Write the other handle_xxx functions the same way,
# each responsible for a single operation.
```
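The TTL level of the in-memory database problem drops into the same template. One workable approach (a sketch, not the only design) stores an expiry timestamp next to each value and filters lazily on read:

```python
# Sketch: lazy TTL expiry. Each field maps to (value, expires_at);
# expires_at = None means the field never expires.
db = {}

def set_at_with_ttl(key, field, value, timestamp, ttl):
    # The pair is alive during [timestamp, timestamp + ttl).
    db.setdefault(key, {})[field] = (value, timestamp + ttl)
    return ""

def set_at(key, field, value, timestamp):
    db.setdefault(key, {})[field] = (value, None)
    return ""

def get_at(key, field, timestamp):
    value, expires_at = db.get(key, {}).get(field, ("", None))
    if expires_at is not None and timestamp >= expires_at:
        return ""  # expired: treated as absent
    return value
```

With the first TTL example above, `set_at_with_ttl("A", "BC", "E", 5, 10)` makes `get_at("A", "BC", 14)` return `"E"` and `get_at("A", "BC", 15)` return `""`; SCAN_AT can filter a record's fields the same way before formatting.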
The idea behind this SOP template: read all levels first, fix the data model early, dispatch each command to a single-purpose handler, and grow the code level by level.
Following this template, you can quickly frame a solution, implement the code layer by layer, and work through all the test cases step by step within the time limit. Hopefully it helps you answer efficiently in the assessment!