When 1 GB of memory meets a 500 GB data file, a traditional sorting algorithm is like trying to empty the ocean with a soup spoon. External sorting is the only way out, and its core idea works like dividing up a string of candied haws in an old Beijing hutong: break the whole into pieces and deal with them one at a time.
The core recipe has three steps:
1. Split: read the huge input file in chunks small enough to fit in memory (here, 8 million IDs at a time).
2. Sort: sort each chunk in memory and write it out to its own temporary file.
3. Merge: k-way merge all of the sorted temporary files into the final output using a min-heap.
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// File splitting + in-memory sorting
public class ChunkSorter {
    private static final int MAX_CHUNK_SIZE = 8_000_000; // 8 million IDs per chunk

    void splitAndSort(Path input) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(input)) {
            List<Long> buffer = new ArrayList<>(MAX_CHUNK_SIZE);
            String line;
            int chunkIndex = 0;
            while ((line = reader.readLine()) != null) {
                buffer.add(Long.parseLong(line));
                if (buffer.size() >= MAX_CHUNK_SIZE) {
                    processChunk(buffer, chunkIndex++);
                    buffer.clear();
                }
            }
            if (!buffer.isEmpty()) processChunk(buffer, chunkIndex);
        }
    }

    private void processChunk(List<Long> data, int index) throws IOException {
        Collections.sort(data); // sort this chunk entirely in memory
        writeToTempFile(data, "chunk_" + index + ".tmp");
    }

    private void writeToTempFile(List<Long> data, String name) throws IOException {
        try (BufferedWriter writer = Files.newBufferedWriter(Path.of(name))) {
            for (Long value : data) {
                writer.write(value.toString());
                writer.newLine();
            }
        }
    }
}

// Priority queue for an efficient k-way merge
class MergeEngine {
    void mergeFiles(List<Path> chunks, Path output) throws IOException {
        PriorityQueue<FileEntry> minHeap = new PriorityQueue<>(
                Comparator.comparingLong(FileEntry::currentValue));

        // Open a reader for each chunk file and seed the heap with its first element
        List<BufferedReader> readers = new ArrayList<>();
        for (Path chunk : chunks) {
            BufferedReader reader = Files.newBufferedReader(chunk);
            readers.add(reader);
            String firstLine = reader.readLine();
            if (firstLine != null) {
                minHeap.add(new FileEntry(Long.parseLong(firstLine), reader));
            }
        }

        try (BufferedWriter writer = Files.newBufferedWriter(output)) {
            while (!minHeap.isEmpty()) {
                FileEntry entry = minHeap.poll();
                writer.write(entry.currentValue().toString());
                writer.newLine();
                String nextLine = entry.reader().readLine();
                if (nextLine != null) {
                    minHeap.offer(new FileEntry(Long.parseLong(nextLine), entry.reader()));
                }
            }
        } finally {
            for (BufferedReader reader : readers) reader.close();
        }
    }
}

// Wrapper pairing a file's current value with its reader
record FileEntry(Long currentValue, BufferedReader reader) {}

When the number of temporary files exceeds 1,000, switch to a two-phase merge strategy:
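The code that originally followed that sentence is not preserved here, so what follows is only a minimal sketch of one way to do it, reusing the MergeEngine above: merge the chunk files in batches (the fan-out of 100 and the intermediate file names are assumptions, not from the original), then merge the intermediate results in a final pass.

// Sketch of a two-phase merge; fan-out and file naming are assumed, uses the imports above
class TwoPhaseMerge {
    private static final int MERGE_FANOUT = 100; // assumed: at most 100 files merged per pass

    void merge(List<Path> chunks, Path output) throws IOException {
        MergeEngine engine = new MergeEngine();
        List<Path> intermediates = new ArrayList<>();

        // Pass 1: merge the chunk files in batches into intermediate files
        for (int i = 0; i < chunks.size(); i += MERGE_FANOUT) {
            List<Path> batch = chunks.subList(i, Math.min(i + MERGE_FANOUT, chunks.size()));
            Path intermediate = Path.of("merge_pass1_" + intermediates.size() + ".tmp");
            engine.mergeFiles(batch, intermediate);
            intermediates.add(intermediate);
        }

        // Pass 2: merge the intermediate files into the final output
        engine.mergeFiles(intermediates, output);
    }
}

This keeps the number of simultaneously open files bounded by the fan-out rather than by the total number of chunks, which is the whole point of splitting the merge into two passes.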
Last year I handled a similar requirement for a logistics company. At first I ignored character encoding, and some IDs were read back incorrectly. Only after explicitly specifying the encoding with Charset.forName("UTF-8") did the data stop getting scrambled. That lesson is like forgetting to turn off the burner under a hot pot: you only learn once the broth has boiled dry.
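For reference, here is a minimal sketch of what pinning the encoding looks like with java.nio (the ExplicitCharsetIo class name is made up for illustration; StandardCharsets.UTF_8 is the constant equivalent of Charset.forName("UTF-8")):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Pin the charset explicitly instead of relying on the platform default
class ExplicitCharsetIo {
    void copyLines(Path input, Path output) throws IOException {
        try (BufferedReader reader = Files.newBufferedReader(input, StandardCharsets.UTF_8);
             BufferedWriter writer = Files.newBufferedWriter(output, StandardCharsets.UTF_8)) {
            String line;
            while ((line = reader.readLine()) != null) {
                writer.write(line);
                writer.newLine();
            }
        }
    }
}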
A rough breakdown of the two phases:

Phase      Time     Peak memory
Chunking   2.5 h    980 MB
Merging    1.8 h    820 MB

When sorting massive data sets, you need both Zhuge Liang's cunning, divide and conquer, and Zhang Fei's courage, facing performance bottlenecks head-on. Remember: there are no unsolvable problems, only methods not yet found. The next time a data problem has you stuck, brew a pot of Tieguanyin and let the divide-and-conquer algorithm dance in the tea's aroma!
Source: 电脑技术汇