Author of the publication

Base station density optimization for high energy efficiency in two-tier cellular networks. GLOBECOM, pages 1804-1809. IEEE, (2014)


Other publications of authors with the same name

Inverse Reinforcement Learning for Trajectory Imitation Using Static Output Feedback Control. IEEE Trans. Cybern., 54 (3): 1695-1707 (March 2024)

Off-policy inverse Q-learning for discrete-time antagonistic unknown systems. Autom., (September 2023)

Inverse Reinforcement Learning for Adversarial Apprentice Games. IEEE Trans. Neural Networks Learn. Syst., 34 (8): 4596-4609 (August 2023)

A resource allocation algorithm using compensation timeslot for self-healing in heterogeneous networks. PIMRC Workshops, pages 122-126. IEEE, (2013)

Flotation process with model free adaptive control. ICIA, pages 442-447. IEEE, (2017)

Base station density optimization for high energy efficiency in two-tier cellular networks. GLOBECOM, pages 1804-1809. IEEE, (2014)

Data-Driven H∞ Optimal Output Feedback Control for Linear Discrete-Time Systems Based on Off-Policy Q-Learning. IEEE Trans. Neural Networks Learn. Syst., 34 (7): 3553-3567 (July 2023)

A dynamic affinity propagation clustering algorithm for cell outage detection in self-healing networks. WCNC, pages 2266-2270. IEEE, (2013)

Inverse reinforcement learning for multi-player noncooperative apprentice games. Autom., (2022)

Robust Inverse Q-Learning for Continuous-Time Linear Systems in Adversarial Environments. IEEE Trans. Cybern., 52 (12): 13083-13095 (2022)