
Resumable Large-File Uploads: A Complete Implementation with Vue 2 and Spring Boot


Uploading large files is a common but challenging requirement in modern web applications. Traditional upload approaches handle unstable networks and multi-gigabyte transfers poorly. This article walks through a resumable large-file upload feature built with a Vue 2 frontend and a Spring Boot backend.

I. Problem Background and Challenges

Real projects regularly need to upload large files: videos, design drawings, database backups, and so on. These files can reach several gigabytes or more, and pushing them through a traditional form upload runs into four problems:

  1. Unstable networks: a dropped connection part-way through throws away all progress
  2. Server load: a large upload ties up server resources for a long time
  3. User experience: no pause/resume controls, and progress is opaque
  4. Duplicate uploads: re-sending a file that already exists wastes bandwidth and storage

II. Solution Overview

Our resumable-upload design rests on five core techniques:

  1. File chunking: split the file into fixed-size chunks (e.g. 2 MB)
  2. Unique identity: use the file's MD5 hash as its unique identifier
  3. Chunk upload: send only the chunks the server is missing
  4. State tracking: record uploaded chunk indexes in Redis
  5. Merge on completion: concatenate the chunks server-side once all have arrived

III. System Architecture

Frontend (Vue 2.6.10)

- File selection component
- MD5 computation module
- Chunk management module
- Upload control module (start / pause / resume / cancel)
- Progress display component
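
To make the flow among these modules concrete, here is a minimal, hypothetical sketch of the orchestration. axios and the computeFileMd5 helper are assumptions (a Web Worker sketch of the helper appears in Section VI); the endpoint paths match the controller in Section IV:

// Hypothetical orchestration sketch; method names other than the
// /api/upload endpoints are illustrative, not from the original code.
async startUpload() {
  this.md5 = await computeFileMd5(this.file)   // assumed helper, see Section VI

  // 1. Ask the server what it already has
  const { data } = await axios.get('/api/upload/check', {
    params: { md5: this.md5, fileName: this.file.name, size: this.file.size }
  })
  if (data.uploaded) {
    return // whole file already on the server: instant "second transfer"
  }

  // 2. Build the chunk list, marking chunks the server already holds
  this.uploadedChunkIndexes = data.uploadedChunks || []
  this.createFileChunks()

  // 3. Upload the missing chunks (sequential here; Section V adds concurrency)
  for (const chunk of this.uploadChunks.filter(c => !c.uploaded)) {
    await this.uploadChunkWithRetry(chunk)
  }

  // 4. Ask the server to assemble the final file
  await axios.post('/api/upload/merge', {
    md5: this.md5,
    fileName: this.file.name,
    totalChunks: this.uploadChunks.length,
    fileSize: this.file.size
  })
}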

Backend (Spring Boot)

- File status check endpoint
- Chunk upload endpoint
- Chunk merge endpoint
- Upload cancel endpoint
- Redis storage service
- Scheduled cleanup task

IV. Core Implementation

1. Frontend Core Code

// File chunking (CHUNK_SIZE is defined elsewhere, e.g. const CHUNK_SIZE = 2 * 1024 * 1024)
createFileChunks() {
  if (!this.file) return
  this.uploadChunks = []
  const chunkCount = Math.ceil(this.file.size / CHUNK_SIZE)
  for (let i = 0; i < chunkCount; i++) {
    const start = i * CHUNK_SIZE
    const end = Math.min(start + CHUNK_SIZE, this.file.size)
    const chunk = this.file.slice(start, end)
    this.uploadChunks.push({
      index: i,
      chunk: chunk,
      uploaded: this.uploadedChunkIndexes.includes(i), // skip chunks the server has
      retries: 0
    })
  }
}

// Chunk upload with a retry mechanism
async uploadChunkWithRetry(chunk, maxRetries = 3) {
  try {
    await this.uploadChunk(chunk)
    chunk.uploaded = true
    this.uploadedSize += chunk.chunk.size
  } catch (error) {
    chunk.retries++
    if (chunk.retries <= maxRetries) {
      // Back off before retrying; the delay grows with each attempt
      await new Promise(resolve => setTimeout(resolve, 1000 * chunk.retries))
      return this.uploadChunkWithRetry(chunk, maxRetries)
    } else {
      throw error
    }
  }
}
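
The uploadChunk method referenced above is not shown in the original snippet. A minimal sketch, assuming axios and the /api/upload/chunk endpoint defined in Section 2.4 below:

// Minimal sketch of the missing uploadChunk method (assumes axios)
async uploadChunk(chunk) {
  const formData = new FormData()
  formData.append('file', chunk.chunk)               // the Blob slice
  formData.append('chunkIndex', chunk.index)
  formData.append('totalChunks', this.uploadChunks.length)
  formData.append('md5', this.md5)
  formData.append('fileName', this.file.name)
  formData.append('fileSize', this.file.size)
  await axios.post('/api/upload/chunk', formData)
}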

2. Backend Core Code

2.1 Redis Configuration Class

@Configuration
@EnableCaching
public class RedisConfig {

    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        template.setConnectionFactory(factory);
        // Serialize/deserialize Redis values as JSON via Jackson
        Jackson2JsonRedisSerializer<Object> serializer = new Jackson2JsonRedisSerializer<>(Object.class);
        ObjectMapper mapper = new ObjectMapper();
        mapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        mapper.activateDefaultTyping(mapper.getPolymorphicTypeValidator(), ObjectMapper.DefaultTyping.NON_FINAL);
        serializer.setObjectMapper(mapper);
        template.setValueSerializer(serializer);
        template.setKeySerializer(new StringRedisSerializer());
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(serializer);
        template.afterPropertiesSet();
        return template;
    }

    @Bean
    public CacheManager cacheManager(RedisConnectionFactory factory) {
        RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofHours(2)) // cache entries expire after 2 hours
                .disableCachingNullValues();
        return RedisCacheManager.builder(factory).cacheDefaults(config).build();
    }
}
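
Note: on newer Spring Data Redis versions (3.x), Jackson2JsonRedisSerializer#setObjectMapper is deprecated; passing the ObjectMapper through the constructor (new Jackson2JsonRedisSerializer<>(mapper, Object.class)) achieves the same configuration.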
2.2 Entity Classes

@Data
public class CancelUploadRequest {
    private String md5;
}

@Data
public class CancelUploadResponse {
    private boolean success;
    private String message;
}

@Data
public class CheckFileResponse {
    private boolean uploaded;
    private List<Integer> uploadedChunks;
}

@Data
public class MergeChunksRequest {
    private String md5;
    private String fileName;
    private int totalChunks;
    private long fileSize;
}

@Data
public class MergeChunksResponse {
    private boolean success;
    private String message;
    private String filePath;
}

@Data
public class UploadChunkResponse {
    private boolean success;
    private String message;
}
2.3 Core Service Implementation

@Slf4j
@Service
public class FileUploadService {

    @Value("${file.upload.chunk-dir:/tmp/chunks/}")
    private String chunkDir;

    @Value("${file.upload.final-dir:/tmp/uploads/}")
    private String finalDir;

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    // Redis key prefixes
    private static final String UPLOAD_CHUNKS_KEY_PREFIX = "upload:chunks:";
    private static final String UPLOAD_INFO_KEY_PREFIX = "upload:info:";

    /**
     * Check file status: complete already, or which chunks exist.
     */
    public CheckFileResponse checkFile(String md5, String fileName, long fileSize) {
        CheckFileResponse response = new CheckFileResponse();
        // If the final file already exists with the right size, skip the upload entirely
        File finalFile = new File(finalDir + md5 + "/" + fileName);
        if (finalFile.exists() && finalFile.length() == fileSize) {
            response.setUploaded(true);
            return response;
        }
        // Read the set of uploaded chunk indexes from Redis
        Set<Object> uploadedChunks = redisTemplate.opsForSet().members(UPLOAD_CHUNKS_KEY_PREFIX + md5);
        if (uploadedChunks != null && !uploadedChunks.isEmpty()) {
            List<Integer> chunks = uploadedChunks.stream()
                    .map(obj -> Integer.parseInt(obj.toString()))
                    .sorted()
                    .collect(Collectors.toList());
            response.setUploadedChunks(chunks);
        } else {
            // No Redis record: fall back to scanning chunk files on disk
            File chunkFolder = new File(chunkDir + md5);
            if (chunkFolder.exists() && chunkFolder.isDirectory()) {
                List<Integer> uploadedChunksList = Arrays.stream(chunkFolder.listFiles())
                        .filter(File::isFile)
                        .map(f -> {
                            try {
                                return Integer.parseInt(f.getName());
                            } catch (NumberFormatException e) {
                                return -1;
                            }
                        })
                        .filter(i -> i >= 0)
                        .sorted()
                        .collect(Collectors.toList());
                response.setUploadedChunks(uploadedChunksList);
                // Rebuild the Redis record from the disk scan
                if (!uploadedChunksList.isEmpty()) {
                    String[] chunksArray = uploadedChunksList.stream()
                            .map(String::valueOf)
                            .toArray(String[]::new);
                    redisTemplate.opsForSet().add(UPLOAD_CHUNKS_KEY_PREFIX + md5, chunksArray);
                    // Expire after 24 hours
                    redisTemplate.expire(UPLOAD_CHUNKS_KEY_PREFIX + md5, Duration.ofHours(24));
                }
            }
        }
        // Save file metadata to Redis
        if (fileSize > 0) {
            Map<String, Object> fileInfo = new HashMap<>();
            fileInfo.put("fileName", fileName);
            fileInfo.put("fileSize", fileSize);
            fileInfo.put("totalChunks", (int) Math.ceil((double) fileSize / (2 * 1024 * 1024)));
            fileInfo.put("lastUpdate", System.currentTimeMillis());
            redisTemplate.opsForHash().putAll(UPLOAD_INFO_KEY_PREFIX + md5, fileInfo);
            redisTemplate.expire(UPLOAD_INFO_KEY_PREFIX + md5, Duration.ofHours(24));
        }
        return response;
    }

    /**
     * Save one chunk.
     */
    public UploadChunkResponse uploadChunk(MultipartFile file, int chunkIndex,
            int totalChunks, String md5, String fileName, long fileSize) {
        UploadChunkResponse response = new UploadChunkResponse();
        try {
            // Create the chunk directory for this file
            File chunkFolder = new File(chunkDir + md5);
            if (!chunkFolder.exists()) {
                chunkFolder.mkdirs();
            }
            // Write the chunk, named by its index
            File chunkFile = new File(chunkFolder, String.valueOf(chunkIndex));
            file.transferTo(chunkFile);
            // Record the chunk index in Redis
            redisTemplate.opsForSet().add(UPLOAD_CHUNKS_KEY_PREFIX + md5, String.valueOf(chunkIndex));
            // Refresh the expiry
            redisTemplate.expire(UPLOAD_CHUNKS_KEY_PREFIX + md5, Duration.ofHours(24));
            // Update file metadata
            if (fileSize > 0) {
                Map<String, Object> fileInfo = new HashMap<>();
                fileInfo.put("fileName", fileName);
                fileInfo.put("fileSize", fileSize);
                fileInfo.put("totalChunks", totalChunks);
                fileInfo.put("lastUpdate", System.currentTimeMillis());
                redisTemplate.opsForHash().putAll(UPLOAD_INFO_KEY_PREFIX + md5, fileInfo);
                redisTemplate.expire(UPLOAD_INFO_KEY_PREFIX + md5, Duration.ofHours(24));
            }
            response.setSuccess(true);
            response.setMessage("Chunk uploaded successfully");
        } catch (IOException e) {
            response.setSuccess(false);
            response.setMessage("Chunk upload failed: " + e.getMessage());
        }
        return response;
    }

    /**
     * Merge all chunks into the final file.
     */
    public MergeChunksResponse mergeChunks(String md5, String fileName, int totalChunks, long fileSize) {
        MergeChunksResponse response = new MergeChunksResponse();
        try {
            File chunkFolder = new File(chunkDir + md5);
            if (!chunkFolder.exists()) {
                response.setSuccess(false);
                response.setMessage("Chunk folder does not exist");
                return response;
            }
            // Verify that all chunks are present
            File[] chunkFiles = chunkFolder.listFiles();
            if (chunkFiles == null || chunkFiles.length < totalChunks) {
                response.setSuccess(false);
                response.setMessage("Chunks are incomplete; cannot merge");
                return response;
            }
            // Create the final file
            File finalFile = new File(finalDir + md5 + "/" + fileName);
            File finalDirFile = finalFile.getParentFile();
            if (!finalDirFile.exists()) {
                finalDirFile.mkdirs();
            }
            // Append every chunk in index order
            try (RandomAccessFile randomAccessFile = new RandomAccessFile(finalFile, "rw")) {
                byte[] buffer = new byte[1024 * 1024]; // 1 MB buffer
                int bytesRead;
                for (int i = 0; i < totalChunks; i++) {
                    File chunkFile = new File(chunkFolder, String.valueOf(i));
                    if (!chunkFile.exists()) {
                        response.setSuccess(false);
                        response.setMessage("Chunk " + i + " is missing");
                        return response;
                    }
                    try (FileInputStream fis = new FileInputStream(chunkFile)) {
                        while ((bytesRead = fis.read(buffer)) != -1) {
                            randomAccessFile.write(buffer, 0, bytesRead);
                        }
                    }
                }
            }
            // Verify the final size
            if (finalFile.length() != fileSize) {
                response.setSuccess(false);
                response.setMessage("Merged file size does not match");
                finalFile.delete();
                return response;
            }
            // Clean up temporary chunks and Redis records
            deleteFolder(chunkFolder);
            redisTemplate.delete(UPLOAD_CHUNKS_KEY_PREFIX + md5);
            redisTemplate.delete(UPLOAD_INFO_KEY_PREFIX + md5);
            response.setSuccess(true);
            response.setMessage("File merged successfully");
            response.setFilePath(finalFile.getAbsolutePath());
        } catch (IOException e) {
            response.setSuccess(false);
            response.setMessage("File merge failed: " + e.getMessage());
        }
        return response;
    }

    /**
     * Cancel an upload and discard its state.
     */
    public CancelUploadResponse cancelUpload(String md5) {
        CancelUploadResponse response = new CancelUploadResponse();
        try {
            // Remove the Redis records
            redisTemplate.delete(UPLOAD_CHUNKS_KEY_PREFIX + md5);
            redisTemplate.delete(UPLOAD_INFO_KEY_PREFIX + md5);
            // Remove the chunk files
            File chunkFolder = new File(chunkDir + md5);
            if (chunkFolder.exists()) {
                deleteFolder(chunkFolder);
            }
            response.setSuccess(true);
            response.setMessage("Upload cancelled");
        } catch (Exception e) {
            response.setSuccess(false);
            response.setMessage("Failed to cancel upload: " + e.getMessage());
        }
        return response;
    }

    /**
     * Clean up stale uploads. Runs daily at 2 a.m.
     */
    @Scheduled(cron = "0 0 2 * * ?")
    public void cleanupExpiredUploads() {
        try {
            // Uploads untouched for 24 hours are considered stale
            long twentyFourHoursAgo = System.currentTimeMillis() - (24 * 60 * 60 * 1000);
            // Scan all upload-info keys
            Set<String> keys = redisTemplate.keys(UPLOAD_INFO_KEY_PREFIX + "*");
            if (keys != null) {
                for (String key : keys) {
                    Long lastUpdate = (Long) redisTemplate.opsForHash().get(key, "lastUpdate");
                    if (lastUpdate != null && lastUpdate < twentyFourHoursAgo) {
                        String md5 = key.substring(UPLOAD_INFO_KEY_PREFIX.length());
                        // Remove Redis records
                        redisTemplate.delete(key);
                        redisTemplate.delete(UPLOAD_CHUNKS_KEY_PREFIX + md5);
                        // Remove chunk files
                        File chunkFolder = new File(chunkDir + md5);
                        if (chunkFolder.exists()) {
                            deleteFolder(chunkFolder);
                        }
                    }
                }
            }
        } catch (Exception e) {
            log.error("Failed to clean up expired uploads", e);
        }
    }

    /**
     * Recursively delete a folder.
     */
    private void deleteFolder(File folder) {
        if (folder.isDirectory()) {
            File[] files = folder.listFiles();
            if (files != null) {
                for (File file : files) {
                    deleteFolder(file);
                }
            }
        }
        folder.delete();
    }
}
Two methods do most of the work here: checkFile reads the uploaded-chunk set from Redis (falling back to a disk scan) so the client can skip what the server already has, and mergeChunks concatenates the chunks in index order, verifies the final size, then clears both the temporary chunk files and the Redis records.
2.4 REST Controller

@RestController
@RequestMapping("/api/upload")
public class FileUploadController {

    @Autowired
    private FileUploadService fileUploadService;

    @GetMapping("/check")
    public ResponseEntity<CheckFileResponse> checkFile(
            @RequestParam String md5,
            @RequestParam String fileName,
            @RequestParam(required = false) Long size) {
        CheckFileResponse response = fileUploadService.checkFile(md5, fileName, size != null ? size : 0);
        return ResponseEntity.ok(response);
    }

    @PostMapping("/chunk")
    public ResponseEntity<UploadChunkResponse> uploadChunk(
            @RequestParam("file") MultipartFile file,
            @RequestParam int chunkIndex,
            @RequestParam int totalChunks,
            @RequestParam String md5,
            @RequestParam String fileName,
            @RequestParam(required = false) Long fileSize) {
        UploadChunkResponse response = fileUploadService.uploadChunk(
                file, chunkIndex, totalChunks, md5, fileName, fileSize != null ? fileSize : 0);
        return ResponseEntity.ok(response);
    }

    @PostMapping("/merge")
    public ResponseEntity<MergeChunksResponse> mergeChunks(@RequestBody MergeChunksRequest request) {
        MergeChunksResponse response = fileUploadService.mergeChunks(
                request.getMd5(), request.getFileName(), request.getTotalChunks(), request.getFileSize());
        return ResponseEntity.ok(response);
    }

    @PostMapping("/cancel")
    public ResponseEntity<CancelUploadResponse> cancelUpload(@RequestBody CancelUploadRequest request) {
        CancelUploadResponse response = fileUploadService.cancelUpload(request.getMd5());
        return ResponseEntity.ok(response);
    }
}
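
For reference, the /merge endpoint consumes a JSON body matching MergeChunksRequest; the values below are made up for illustration:

POST /api/upload/merge
Content-Type: application/json

{
  "md5": "e99a18c428cb38d5f260853678922e03",
  "fileName": "demo.mp4",
  "totalChunks": 512,
  "fileSize": 1073741824
}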
2.5 Configuration

Add the following to application.yml:

file:
  upload:
    chunk-dir: /tmp/chunks/
    final-dir: /tmp/uploads/

spring:
  servlet:
    multipart:
      max-file-size: 10GB
      max-request-size: 10GB
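
One easy-to-miss detail: the @Scheduled cleanup task in FileUploadService only runs if scheduling is enabled, and the Redis connection itself (spring.redis.host, spring.redis.port) must also be configured. A minimal sketch of the entry point, assuming a standard Spring Boot application class:

@SpringBootApplication
@EnableScheduling   // without this, the 2 a.m. cleanup task never fires
public class FileUploadApplication {
    public static void main(String[] args) {
        SpringApplication.run(FileUploadApplication.class, args);
    }
}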

V. Performance Optimization Strategies

1. Redis Optimizations

  • Store chunk indexes in a Set for fast lookups of already-uploaded chunks
  • Apply a 24-hour TTL so abandoned uploads clean themselves up
  • Store file metadata in a Hash (the resulting key layout is summarized below)
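
For reference, the service code in Section IV produces this key layout (the md5 placeholder is shown as <md5>):

upload:chunks:<md5>   Set   of uploaded chunk indexes ("0", "1", ...)          TTL 24h
upload:info:<md5>     Hash  of fileName, fileSize, totalChunks, lastUpdate     TTL 24h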

2. Concurrent Upload Control

// Cap concurrency so the browser's connection pool isn't exhausted
const MAX_CONCURRENT_UPLOADS = 3

// Throttled upload loop
const activeUploads = []
for (const chunk of chunksToUpload) {
  if (activeUploads.length >= MAX_CONCURRENT_UPLOADS) {
    await Promise.race(activeUploads)   // wait until a slot frees up
  }
  const uploadPromise = this.uploadChunkWithRetry(chunk).finally(() => {
    activeUploads.splice(activeUploads.indexOf(uploadPromise), 1) // free the slot
  })
  activeUploads.push(uploadPromise)
}
await Promise.all(activeUploads)        // drain the remaining in-flight chunks
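
A cap of three is a deliberately conservative choice: HTTP/1.1 browsers typically allow around six concurrent connections per origin, so limiting chunk uploads to three leaves connections free for the application's other requests.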

3. Smart Retry Mechanism

  • Exponential backoff between attempts (the snippet in Section IV uses a linear delay; an exponential variant is sketched below)
  • A cap on the maximum number of retries
  • Automatic retry on network errors
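
A sketch of the exponential variant, swapping only the delay calculation in uploadChunkWithRetry; the cap and jitter values are illustrative:

// Exponential backoff with jitter, replacing the linear delay
// in uploadChunkWithRetry from Section IV
const base = 1000 * Math.pow(2, chunk.retries - 1)         // 1s, 2s, 4s, ...
const delay = Math.min(base, 30000) + Math.random() * 500  // cap, plus jitter
await new Promise(resolve => setTimeout(resolve, delay))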

VI. Practical Recommendations

  1. Chunk size: tune to the network and file size; 2-5 MB is usually a good range
  2. MD5 computation: for very large files, hash in a Web Worker so the UI stays responsive (see the sketch after this list)
  3. Memory management: release Blob references for chunks that have finished uploading
  4. Security: validate file types, enforce size limits, and check permissions
  5. Monitoring: log success rates, durations, and other metrics to guide tuning
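
A minimal sketch of recommendation 2, assuming the spark-md5 library (its incremental SparkMD5.ArrayBuffer API) and a reasonably modern browser (Blob.arrayBuffer()); the file name md5.worker.js is illustrative:

// md5.worker.js -- hashes chunks off the main thread (assumes spark-md5)
importScripts('spark-md5.min.js')

const spark = new SparkMD5.ArrayBuffer()
self.onmessage = ({ data }) => {
  if (data.done) {
    self.postMessage(spark.end())   // final 32-char hex digest
  } else {
    spark.append(data.buffer)       // feed one chunk's ArrayBuffer
  }
}

// Main thread: stream the file through the worker chunk by chunk
async function computeFileMd5(file, chunkSize = 2 * 1024 * 1024) {
  const worker = new Worker('md5.worker.js')
  for (let start = 0; start < file.size; start += chunkSize) {
    const buffer = await file.slice(start, start + chunkSize).arrayBuffer()
    worker.postMessage({ buffer }, [buffer])   // transfer ownership, no copy
  }
  return new Promise(resolve => {
    worker.onmessage = e => { worker.terminate(); resolve(e.data) }
    worker.postMessage({ done: true })
  })
}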

VII. Summary

This article presented a complete resumable upload solution for large files: a Vue 2 frontend and a Spring Boot backend combining file chunking, Redis-backed state tracking, and retry logic to address the usual pain points of large uploads. Its main strengths:

  1. Reliability: resumption and retries keep the upload success rate high
  2. Performance: concurrent chunk uploads and Redis caching speed things up
  3. User experience: real-time progress feedback and pause/resume/cancel controls
  4. Extensibility: the modular design is easy to adapt and extend

The approach has been validated in real projects and reliably handles gigabyte-scale files; treat it as a reference and tune the parameters and features to fit your own workload.


