CV
A PDF version of my curriculum vitae can be found on the right side of this page — just click the PDF icon →
Basics
| Name | Jiefeng Zhou |
| Role | Senior Undergraduate Student |
| Email | jiefeng.zhou@hotmail.com |
| Phone | (+86)-159-2802-0298 |
| URL | https://github.com/JeffeyChou |
Education
- 2021.09 - 2025.07 | Chengdu, China
B.S. | GPA: 3.87/4.0
University of Electronic Science and Technology of China (UESTC, 985 project)
Mathematics-Physics Fundamental Science (Elite Program@Yingcai Honors College)
- Mathematics: Calculus, Algebra and Geometry, Numerical Analysis, Time Series Analysis, Stochastic Processes, Probability and Statistics
- Physics: University Physics, Physical Experiment, Theoretical Mechanics, Quantum Mechanics
- Computer Science: Data Structure and Algorithm, Machine Learning, Deep Learning, Reinforcement Learning
Publications
- Accepted | Limit of the Maximum Random Permutation Set Entropy
Physica A (IF 2.8, JCR Q2)
Jiefeng Zhou, Zhen Li, Kang Hao Cheong, and Yong Deng
- 2024 | An Improved Information Volume of Mass Function Based on Plausibility Transformation Method
Expert Systems with Applications (IF 7.5, JCR Q1)
Jiefeng Zhou, Zhen Li, and Yong Deng
- 2024 | Generalized Information Entropy and Generalized Information Dimension
Chaos, Solitons & Fractals (IF 5.3, JCR Q1)
Tianxiang Zhan, Jiefeng Zhou, Zhen Li, and Yong Deng
- 2024 | Random Walk in Random Permutation Set Theory
Chaos: An Interdisciplinary Journal of Nonlinear Science (IF 2.7, JCR Q1)
Jiefeng Zhou, Zhen Li, and Yong Deng
Awards
- 2024, 2023, 2022
National Grants (×3)
Sichuan Provincial Department of Education (National Level, Top 20%)
- 2024.11.01
Dream Scholarship
China Education Development Foundation (National Level, Top 20%)
- 2023, 2022
Model Student Scholarship (×2)
UESTC (University Level, Top 30%)
- 2023.12.01
Top 7% | The 17th IEEExtreme Programming Competition
IEEExtreme Programming Competition (Global Level)
- 2023.05.01
First Prize | The 23rd Mathematical Modeling Competition of UESTC
UESTC (University Level, Top 5%)
- 2023.04.01
First Prize | Sichuan College Students' Information Literacy Competition
Sichuan Provincial Department of Education (Provincial Level, Top 1%)
Skills
| Programming Languages | |
| Python (3+ yrs) | |
| C (3+ yrs) | |
| MATLAB (2+ yrs) | |
| Software & Tools | |
| Unix (1+ yr) | |
| Git (2+ yrs) | |
| MS Office (4+ yrs) | |
| PyTorch (2+ yrs) | |
| TensorFlow (1+ yr) | |
| LaTeX (2+ yrs) | |
Languages
| Mandarin | Native speaker |
| Cantonese | Proficient speaker |
| English | Proficient speaker (TOEFL: 102) |
Volunteer
- 2023.12 | Elsevier
Peer Reviewer
Expert Systems with Applications (Journal)
- Reviewed a manuscript for the Expert Systems with Applications journal.
References
| Yong Deng | dengentropy at uestc dot edu dot cn, Full Professor, University of Electronic Science and Technology of China |
| Kang Hao Cheong | kanghao.cheong at ntu dot edu dot sg, Associate Professor, Nanyang Technological University |
| Yue Dong | yue.dong at ucr dot edu, Assistant Professor, University of California, Riverside |
| Sheng Wang | wsh_keylab at uestc dot edu dot cn, Professor, University of Electronic Science and Technology of China |
Projects
- 2024.07 - Present
Natural Language Processing: Development of Large-Scale Language Model Datasets (RA at UCR)
- Developed a benchmark for Large Language Models (LLMs) that tests their reasoning and instruction-following abilities under conflicting information.
- Proposed an information hierarchy within instructions and verified its effectiveness through an ablation study.
- Designed scoring prompts, collected responses from different LLMs, and conducted an empirical study on the results.
- 2024.03 - 2024.05
Random Walk in Random Permutation Set Theory (Lead)
- Established a link between Random Permutation Set Theory (RPST) and random walks, enhancing understanding in both fields.
- Developed a random walk model derived from RPST, showing similarities to Gaussian random walks.
- Demonstrated that this random walk can be transformed into a Wiener process through a specific scaling procedure.
- 2024.01 - 2024.03
Generalized Information Entropy and Generalized Information Dimension (Core member)
- Proposed a Generalized Information Entropy (GIE) to unify various forms of entropy and established the relationship between entropy, fractal dimension, and number of events.
- Developed a Generalized Information Dimension (GID) to extend the definition of information dimension from probability distributions to mass functions.
- Analyzed the role of GIE in approximation calculation and coding systems, demonstrating the particle nature of information similar to Boltzmann entropy.
- 2023.10 - 2024.01
Limit of the Maximum Random Permutation Set Entropy (Lead)
- Proposed the Random Permutation Set (RPS) as a generalization of evidence theory and defined the concept of the envelope of entropy function.
- Derived and proved the limit of the envelope of RPS entropy, demonstrating significant reduction in computational complexity.
- Validated the efficiency and conciseness of the proposed envelope through numerical examples, providing new insights into the maximum entropy function.
- 2022.12 - 2023.05
An Improved Information Volume of Mass Function (Lead)
- Proposed an improved Information Volume of Mass Function (IVMF) based on the Plausibility Transformation Method (PTM) to address issues with inconsistent frames of discernment (FOD).
- Developed a method that yields more reasonable results compared to existing methods in cases of inconsistent FOD, by treating IVMF as a geometric mean of first-order and higher-order information volumes.
- Demonstrated the efficacy and rationality of the proposed IVMF through a series of numerical examples and an application in threat assessment.
- 2024.04 - 2024.05
Bayesian Deep Q-Networks for Suika Game
- Modeled the open-source game as a Markov Decision Process (MDP), extracted relevant game data, and studied Bayesian Deep Q-learning algorithms through related papers and code.
- Implemented and enhanced Bayesian Q-learning algorithms within the game environment, integrating Thompson sampling for better exploration via posterior probability estimates.
- 2023.07 - 2023.07
Pneumonia Detection and Breast Cancer Segmentation with Attention-UNet
- Implemented UNet and Attention-UNet in PyTorch after thorough research into CNN and UNet principles, achieving over 95% accuracy on the validation set and successfully adapting the segmentation models to the original image classification task.
- Enhanced the training dataset through data augmentation to improve training effectiveness, ensuring better performance with limited initial data.
- Collected and visualized training logs.
- 2024.04 - 2024.05
Implementation of a Text Generation Model Based on the Transformer Architecture
- Implemented a decoder-only transformer-based text generation model (GPT-2) with self-attention mechanisms to capture data patterns and dependencies.
- Analyzed model performance through detailed experimental results, discussing strengths in generating coherent text and identifying areas for enhancement in text generation tasks.